Title:
METHOD AND SYSTEM FOR IDENTIFYING A PERSON
Document Type and Number:
WIPO Patent Application WO/2008/155447
Kind Code:
A2
Abstract:
This publication describes a method and system for identifying a person based upon the characteristics of his or her eye(s). Currently known identification methods are based upon capturing one or more images of a person's eye(s), after which said image(s) are analyzed in order to identify said person. Today's eye recognition is susceptible to various falsification and fraudulent undertakings, for example by means of a printed image or contact lens representing the iris of another person. In the identification method and system according to the present invention, characteristics of a person's eye(s) are observed in a continuous identification process, and based upon said observed characteristics of said eye(s), said person can be identified in a reliable manner.

Inventors:
LEHTO TIMO TAPANI (FI)
Application Number:
PCT/FI2008/000077
Publication Date:
December 24, 2008
Filing Date:
June 19, 2008
Assignee:
LEHTO TIMO TAPANI (FI)
International Classes:
G06K9/00
Domestic Patent References:
WO2002007068A12002-01-24
WO2006088042A12006-08-24
Foreign References:
JP2006085226A2006-03-30
JP2006158827A2006-06-22
US20060029262A12006-02-09
US20060088193A12006-04-27
Claims:
CLAIMS

What we claim as our invention is:

1. A method for identifying a person based upon characteristics of said person's eye(s), characterized in that varying directions are continuously indicated to said person towards which said person's eye(s) should be turned wherein at least a portion of said directions are unknown prior to the identification procedure and at least a portion of said directions differ from the general direction(s) from which information on said person's eye(s) is captured; information on said portions of said person's eye(s) is continuously captured as said person's eye(s) turn(s) towards said directions, wherein said information comprises patterns on the exterior and/or interior of said person's eye(s); said patterns are continuously extracted from said captured information; and said person is identified based upon the structure of a synthesis of said patterns.

2. A method for identifying a person based upon characteristics of said person's eye(s) as recited in claim 1, characterized in that said directions are decided by an authenticating party.

3. A method for identifying a person based upon characteristics of said person's eye(s) as recited in claim 1, characterized in that said authenticating party decides whether adequate information on said person's eye(s) has been captured and if so, said authenticating party decides, based upon information on said person's eye(s) captured and stored at a prior time, whether said captured information corresponds to information expected by said authenticating party to be captured when said person's eye(s) turns towards said directions.

4. A method for identifying a person based upon characteristics of said person's eye(s) as recited in claim 1, characterized in that said information is captured in real-time.

5. A method for identifying a person based upon characteristics of said person's eye(s) as recited in claim 1, characterized in that said directions are indicated to said person via visual stimuli.

6. A system for identifying a person based upon characteristics of said person's eye, characterized in that an image capturing apparatus continuously indicates varying directions to said person towards which said person's eye(s) should be turned wherein at least a portion of said directions are unknown prior to the identification procedure and at least a portion of said directions differ from the general direction(s) from which information on said person's eye(s) is captured; information on said portions of said person's eye(s) is continuously captured by said image capturing apparatus as said person's eye(s) turn(s) towards said directions, wherein said information comprises patterns on the exterior and/or interior of said person's eye(s); and said person is identified based upon the structure of a synthesis of said patterns.

7. A system for identifying a person based upon characteristics of said person's eye as recited in claim 6, characterized in that said directions are decided by an authenticating party.

8. A system for identifying a person based upon characteristics of said person's eye as recited in claim 6, characterized in that said authenticating party decides whether adequate information on said person's eye(s) has been captured and if so, said authenticating party decides, based upon information on said person's eye(s) captured and stored at a prior time, whether said captured information corresponds to information expected by said authenticating party to be captured when said person's eye(s) turns towards said directions.

9. A system for identifying a person based upon characteristics of said person's eye as recited in claim 6, characterized in that said information is captured in real-time.

10. A system for identifying a person based upon characteristics of said person's eye as recited in claim 6, characterized in that said directions are indicated to said person via visual stimuli.

Description:

TITLE OF THE INVENTION

Method and System for Identifying a Person

TECHNICAL FIELD

The field of invention relates generally to methods and systems for identification of a person, and in particular to identification of a person using biometric information gathered from the person's eye or eyes.

BACKGROUND ART

This disclosure contemplates an enhanced method and system for reliable identification of a person using biometric information gathered from either or both of the person's eyes, which for the sake of convenience, is referred to as "eye recognition". Eye recognition, typically based upon recognizing vascular patterns of the retina and/or patterns within the iris that are practically unique to each person, is commonly considered one of the most reliable techniques for identifying a person under circumstances in which taking physical samples, e.g. for DNA analysis purposes, or physical imprints, such as dental patterns, is inappropriate due to time constraints or other prerequisites of the identification situation.

It is well known that humans have individual and distinct fingerprints. Methods and apparatus have been developed for identifying humans by reference to these fingerprints. Similarly, it is now becoming increasingly well known that the human iris and retina have a level of individuality and distinctiveness that exceeds even that of the human fingerprint.

Leonard Flom and Aran Safir originally showed that the iris is unique among individuals and patented their findings in United States patent US4641349 [1] of 1987. In a 1994 United States patent US5291560(A) [2] John G. Daugman further addressed the issue of iris pattern signal processing and correlation. In the same year, Richard Wildes et al. disclosed their own iris recognition method in a scientific article [3], and in 1996 a nearly identical group led by Wildes further disclosed their invention in United States patent US5572596(A) [4]. Rodney Doster further disclosed an iris recognition apparatus and method in 1999 exploiting light reflected from the retina in United States patent US5956122(A) [5].

Methods and systems believed to comprise the capturing of multiple images of the iris are also known, such as South Korean patent application KR2001-0006976 by Dae H. Kim, Jun Y. Park and Jang S. Ryu [6]; United States patent application US2005/0281440(A1) by Frederick A. Perner [7]; Japanese patent application JP2004-220376(A) by Tsutomu Nakazawa and Yoshio Ichihashi [8]; United States patent application US2006/0274919(A1) by Dominick Loiacono and James R. Matey [9]; South Korean patent application KR10-2005-0009959(A) by Kwang H. Bae, Jai H. Kim and Kang R. Park [10]; Japanese patent application JP2007-11710(A) by Yuichiro Takahashi [11]; PCT international application WO2006/088042(A1) by Shinichi Tsukahara [12]; United States patent application US2003/0012413(A1) by Takashi Kusakari and Koji Wakiyama [13]; and Japanese patent application JP2000-33080(A) by Takahiro Oda [14].

Methods and systems believed to comprise the capturing of multiple iris images and synthesizing these into one or more composite images are also known, such as PCT international application WO2005/008590(A1) by Takeo Azuma, Kenji Kondo and Masahiro Wakamori [15]; United States patent application US2005/0249385(A1) by Kenji Kondo and Takeo Azuma [16]; PCT international application WO2005/109344(A1) by Kenji Kondo, Takeo Azuma and Masahiro Wakamori [17]; French patent FR2884947(A1) by Martin Cottard and Gilles Monteilliet [18]; a system disclosed in a 2006 scientific publication by Byungjun Son, Sung-Hyuk Cha, and Yillbyung Lee [19]; and United States patent application US2003/0152252(A1) by Kenji Kondo, Takeo Azuma, Kenya Uomori and Yoshito Aoki [20].

Methods and systems believed to comprise guidance of the eye prior to and/or while capturing one or more images of the iris are also known, such as Japanese patent application JP2003-108983(A) [21] and United States patent application US2006/0120707(A1) [22] by Takashi Kusakari, Jyoji Wada and Tamotsu Kaneko; Japanese patent application JP2005-157970(A) [23] by Takeshi Fujimatsu and Yoshito Aoki; United States patent US6850631(B1) by Takahiro Oda and Yuji Ohta [24]; Canadian patent CA2350309(A1) by James L. Cambier and Clyde Musgrave [25]; PCT international application WO00/30525(A2) by James T. McHugh, James H. Lee and Cletus B. Kuhla [26]; PCT international application WO01/20561(A1) by James L. Cambier and John E. Siedlarz [27]; United States patent US6333988(B1) by Christopher H. Seal, Gifford M. Merrick and David J. McCartney [28]; Japanese patent application JP2006-85226(A) by Motoko Tachibana [29]; United States patent application US2005/0129286(A1) by Christopher D. Hekimian [30]; Japanese patent application JP2006-158827(A) by Masahiro Wakamori [31]; Japanese patent application JP2006-181012(A) by Masahiro Wakamori and Kaoru Morita [32]; and PCT international application WO2006/052004(A1) by Shinichi Tsukahara [33].

Methods and systems believed to comprise retinal recognition are also known. Robert B. Hill disclosed some of the early inventions in this field in United States patents US4109237 [34], US4393366 [35] and US4620318 [36]. These were later followed by numerous inventions in the field, such as United States patents US6453057(B1) [37] and US6757409(B2) [38] by John Marshall and David Usher; United States patent application US2002/0093645(A1) by Gregory L. Heacock [39]; PCT international application WO02/075639(A1) by Gregory L. Heacock and David F. Müller [40]; PCT international application WO02/07068(A1) by D. Beghuin, P. Chevalier, D. Devenyn, K. Nachtergaele and J.-M. Wislez [41]; United States patent application US2004/0202354(A1) by Takayoshi Togino [42]; and United States patent application US2006/0147095(A1) by David B. Usher, Gregory L. Heacock, John Marshall and David F. Müller [43].

Methods and systems believed to comprise both retinal and iris recognition are also known, such as United States patent application US2006/0088193(A1) by David F. Müller, Gregory L. Heacock and David B. Usher [44]; United States patent application US2005/0117782(A1) by Takuya Imaoka, Jyoji Wada and Toshiaki Sasaki [45]; Japanese patent application JP2006-350410(A) by Yoshito Aoki [46]; and Japanese patent application JP2005-304809(A) by Masaru Ikoma and Tomoyoshi Nakaigawa [47].

Methods and systems believed to comprise eye movement recognition are also known, such as an experiment disclosed in a 2004 scientific publication by Pawel Kasprowski and Jozef Ober [48]; and PCT international application WO01/88857(A1) by Eric Lauper and Adriano Huber [49].

A method and system believed to comprise combined conjunctiva and iris recognition is also known: PCT international application WO2006/119425(A2) by Reza Derakhshani and Arun Ross [50].

Challenges for Contemporary Eye Recognition

Contemporary eye recognition techniques are believed to be susceptible to a variety of falsification methods used to produce a false identification result when identifying a person, such as staging a photograph or photographs of the iris(es) and/or retina(s) of another person in front of the eye recognition device, or by wearing a contact lens or lenses comprising a printed pattern or patterns of the iris(es) of another person's eye(s) during the eye recognition procedure. For example, in a 2006 article by Jinyu Zuo, Natalia Schmid and Xiaohan Chen [51] a method was demonstrated for generating synthesized iris images that in most cases match the performance of images of real irises.

It is believed that in order to counter fraudulent identification and/or authentication attempts, a number of liveness tests are being developed for verification of the authenticity of the eye(s) subject to analysis, such as measuring spectrographic properties of tissue, fat, blood and melanin pigment of the eye(s), coaxial retinal back-reflection, reflections from corneal and lens surfaces, as well as involuntary eye behavior, such as pupillary unrest and pupillary light reflex, and voluntary eye behavior, such as eye motions and eyelid blinks on command.

Augmentation of the basic eye recognition procedure with numerous complex liveness tests is believed to further add to the complexity of the overall system. Such liveness tests are nevertheless also believed to be susceptible to falsification techniques. Hence, contemporary eye recognition systems requiring uncompromised identification accuracy are believed to be commonly supervised in person by one or more representatives of the authenticating party, and consequently eye recognition systems capable of identifying a person in unsupervised conditions with uncompromised accuracy are currently believed to be scarce or non-existent.

Furthermore, information transfer between the eye recognition device and authenticating party is also believed to be vulnerable to interception or interference. Contemporary eye recognition systems are believed to transmit image information captured from the eye(s) and/or numeric data derived from the image information to the authenticating party for verification, typically in encrypted format. When intercepted by an illicit third party, it is believed that this information can potentially later be used for false authentication purposes, in particular if the illicit third party is successful in decrypting the intercepted information.

It is further believed that by obtaining adequate information on the unique retinal vascular and iris patterns of the person, an imposter can use this information for false identification in any future biometric identification system requiring such information, as the victim cannot easily modify the patterns in his or her eyes, even if the interception was later detected.

Interception is of particular concern when transmitting information over wireless networks, such as but not limited to cellular networks, and public data networks, such as but not limited to the Internet. Some inventions have been disclosed that are believed to attempt to prevent fraudulent use of intercepted data. For example, United States patent application US2006/0133651(A1) by Andrew J. Polcha and Michael P. Polcha [52] is believed to disclose a method for identifying persons in such a manner that one or more biometrics are distorted prior to detection and recognition, and in a system according to Japanese patent application JP2005-157970(A) by Takeshi Fujimatsu and Yoshito Aoki [23], the quality of the captured image is believed to be degraded prior to displaying this to the person.

Although it is believed that security within the contemporary retinal and/or iris identification systems can be enhanced by various methods, such as but not limited to encryption and decryption of transmitted data, as well as methods that attempt to verify that the eye subject to analysis is authentic and live, it is believed that each of these additional security methods are themselves susceptible to further countermeasures by illicit third parties.

It is also believed that users have frequently considered contemporary retinal scanning techniques intrusive, cumbersome and/or uncomfortable to use, due to direct eye contact, the person being required to focus on an alignment light for a number of seconds and/or the shining of bright light into or near to the center of the retina.

In addition, successful execution of the identification procedure is believed to commonly require: (1) the person subject to identification being instructed regarding the steps of the identification procedure, (2) the person understanding the instructions, and (3) the person being willing to and capable of abiding by the instructions.

Contemporary retinal scanning techniques are thus believed to be inappropriate for persons who cannot comprehend or execute given instructions such as (1) infants, (2) mentally disabled, (3) elderly persons who suffer difficulties with memory and/or comprehension, and (4) persons who cannot comprehend instructions due to illiteracy, hearing disability, the lack of language skills required by the authenticating party, and/or other reasons due to which the person cannot comprehend and/or execute given instructions.

It is further believed that in the case that the person subject to identification is unwilling to become identified, he or she may intentionally act in such a manner that the identification procedure systematically fails, e.g. by failing to align his or her eye(s) as instructed and potentially claiming inability to comprehend and/or execute given instructions.

Iris recognition techniques are believed to typically be less intrusive than those of retinal recognition, but it is further believed that such techniques encompass further challenges in the areas of privacy and security, due to the fact that the person's irises are visible and available to any party capable of capturing an accurate image of the person's eye(s). In addition, when the image of a person is captured from a distance and it thus comprises a variety of auxiliary information in addition to the portion of the image that contains relevant information on the person's eye(s), the ability to locate and extract relevant portions of the image in a timely and reliable manner is still believed to pose a major challenge for modern technology.

In addition, eyelids and eyelashes in front of the person's eye(s), which may partially obstruct the view of the iris(es), are believed to constitute further challenges for these techniques.

Techniques using eye motion analysis for identifying a person are believed to be relatively new and to require further research in order to increase the reliability of the identification procedure. For example, the results of the 2004 experiment by Kasprowski and Ober varied considerably, e.g. the false rejection rate ranged from 6.7 to 43.3 percent.

Alarm mechanisms, such as those disclosed in United States patent application US2006/0072793(A1) by Gary E. Determan [53] and PCT international application WO2006/119425(A2) by Reza Derakhshani and Arun Ross [50] wherein an alarm is triggered by physical actions of a victim's eye and/or eyelid, such as specific eye movements, a predetermined blinking pattern by the victim's eyelid and/or closure of the victim's eyelid for a predetermined time, are believed to be appropriate for triggering a noticeable alarm, but to lack a secure means of initiating a silent alarm imperceptible by the aggressor.

For example, in the case that the aggressor is aware of the predetermined alarm triggering actions and is able to observe the eye and/or eyelid movements of the victim during the identification procedure (1) the aggressor may impose physical threats upon the victim in order to avoid triggering of the alarm and/or (2) the victim may be subject to physical retaliation by the aggressor if actions known to trigger an alarm are detected by the aggressor.

DISCLOSURE OF INVENTION

Definitions

In the following, the semantics of specific terms used in this disclosure are defined.

The terms "optical" and "light" refer to electromagnetic radiation in the infra-red, visible and ultra-violet portions of the electromagnetic spectrum, i.e. having wavelengths in the range 10 nm to 1 mm.

The term "exterior of the eye" refers to the outer surface of the eyeball, including but not limited to the conjunctiva, sclera, episclera, cornea, anterior chamber, iris and/or pupil.

The term "interior of the eye" refers to the inner surface of the eyeball, including but not limited to the retina further comprising the macula, fovea, and optic disc.

The term "patterns" when relating to patterns on the exterior and/or interior of the eye refers to any visible features of the eye, including but not limited to vascular patterns and other features in the conjunctiva, sclera and/or episclera; features in the iris; and vascular patterns and other features in the retina, macula, fovea, and/or optic disc.

The term "synthesis" of patterns refers to combining a plurality of patterns to form one or more composite patterns, for example by means of various mathematical algorithms.
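
As an illustration of what such a synthesis might look like computationally, the following Python sketch combines several partial pattern maps into one composite by averaging overlapping regions. The NaN-masking scheme and the averaging rule are assumptions chosen purely for illustration; the disclosure leaves the mathematical algorithm open.

```python
import numpy as np

def synthesize_patterns(patterns):
    """Combine several partial pattern maps into one composite.

    Each pattern is a 2-D array where NaN marks regions not visible
    in that capture; overlapping regions are averaged.
    (Illustrative only -- the disclosure does not fix the algorithm.)
    """
    stack = np.stack(patterns)             # shape: (n_captures, H, W)
    composite = np.nanmean(stack, axis=0)  # average wherever data overlaps
    return composite

# Example: two partial captures of a 2x2 pattern, each missing one region
a = np.array([[1.0, np.nan], [3.0, np.nan]])
b = np.array([[1.0, 2.0], [np.nan, 4.0]])
print(synthesize_patterns([a, b]))  # -> [[1. 2.] [3. 4.]]
```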

The term "eye motion" refers to rotational movement of the eyeball, the pupillary light reflex and/or the pupillary accommodation reflex.

The term "structure" in conjunction with patterns on the exterior and/or interior of the eye refers to the internal qualities, quantities and arrangement of features and patterns of the interior and/or exterior of the eye, which are unaffected by eye motion and/or the rotational position of the eye.

The term "eye motion characteristics" refers to the eye motion of a person, including but not limited to the paths along which the line-of-sight of the eye moves while the eye rotates; the response time to stimuli prior to eye motion; the velocity and/or extent of eye rotation; the velocity, extent and/or sensitivity of pupillary reflexes; and/or specific characteristics that are present during saccadic eye motion.

The term "direction" of the eye refers to the rotational bearing of the line-of-sight of the eyeball.

The phrase "general direction(s) from which information is captured" refers to the approximate direction(s) of the center(s) of the image input view(s) of the image capturing apparatus(es) or the like.

The term "image input view" in conjunction with an image capturing apparatus refers to the portion of the image capturing apparatus onto which the optical image to be captured is projected.

The term "cellular network" refers to a long-range wireless network connecting mobile handsets used for communication via a plurality of networked base station apparatus, and which is interconnected to the public switched telephone network.

The term "computer network" refers to communication networks commonly used for interconnection of computer terminals, which are increasingly further interconnected using mainly packet-switched data transmission technologies, such as the Transmission Control Protocol (TCP), the Hypertext Transfer Protocol (HTTP) and the Internet Protocol (IP). Computer networks comprise a plurality of data cables, networking apparatus and increasingly base station apparatus providing wireless access to the network.

It is obvious to those skilled in the art that the distinction between mobile handsets and computer terminals is gradually diminishing, as modern mobile handsets may comprise computer network connectivity, and modern computer terminals, which themselves are increasingly mobile, may comprise cellular network connectivity.

The term "WLAN" refers to a wireless local area network, which is a short-to-medium range wireless computer network.

The term "Internet" refers to globally interconnected computer networks.

The term "electronic portal" refers to an electronic online service available to the public via one or more data networks and/or cellular networks.

The phrase "logging into an electronic portal" refers to a person proving his or her identity to the organization controlling the electronic portal by disclosing his or her user credentials, by using the identification method of this disclosure and/or by using other secure and reliable means of identifying himself or herself.

The term "private area" of an electronic portal refers to a portion of the electronic portal that is only accessible to a person who has logged into the electronic portal.

The term "digital key" refers to any sequence of data used for encryption and/or decryption purposes.

The term "Bluetooth®" refers to the specification for the use of low-power radio communications to and from wirelessly linked phones, computers and other network devices over short distances, ratified as the IEEE 802.15.1 standard. Although several versions of Bluetooth® technology co-exist (1.0, 1.1, 1.2 and 2.0 to date), no preferences are taken in this disclosure regarding the version of Bluetooth® to be used in conjunction with the disclosed invention, and the term "Bluetooth®" may further refer to any future version of Bluetooth® technology or the like.

The term "UWB" refers to the Ultra-Wideband (UWB) communication technology for transmitting information spread over a large bandwidth, in which the signal bandwidth emitted from the transmitting antenna exceeds the lesser of 500 MHz or 20% of the center frequency.
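
The quantitative part of this definition can be expressed directly. The sketch below encodes only the stated bandwidth rule; the function name is arbitrary and the check is not a substitute for any regulatory definition of UWB.

```python
def is_uwb(bandwidth_hz: float, center_hz: float) -> bool:
    """True if a signal meets the UWB criterion given above: its
    bandwidth exceeds the lesser of 500 MHz or 20% of the center
    frequency. (Illustration of the definition only.)"""
    return bandwidth_hz > min(500e6, 0.2 * center_hz)

# 600 MHz at a 4 GHz center: threshold is min(500 MHz, 800 MHz) = 500 MHz
print(is_uwb(600e6, 4e9))   # -> True
# 100 MHz at a 1 GHz center: threshold is min(500 MHz, 200 MHz) = 200 MHz
print(is_uwb(100e6, 1e9))   # -> False
```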

The term "USB" refers to Universal Serial Bus specification, which is a widely used hardware interface for attaching peripheral devices. Although several versions of USB technology co-exist (1.0, 1.1 and 2.0 to date), no preferences are taken in this disclosure regarding the version of USB to be used in conjunction with the disclosed invention, and the term "USB" may further refer to any future version of USB technology or the like.

The term "FireWire" refers to the IEEE 1394 serial bus interface standard, offering high-speed communications and isochronous real-time data services, and which is also known as "i.LINK". Although several versions of FireWire technology co-exist (FireWire 400, FireWire 800 and IEEE 1394b to date), no preferences are taken in this disclosure regarding the version of FireWire to be used in conjunction with the disclosed invention, and the term "FireWire" may further refer to any future version of FireWire technology or the like.

The term "point-of-presence" refers to either an unsupervised or a supervised fixed physical location at which identification takes place.

The term "RFID" refers to radio frequency identification, which is an automatic identification method relying on storing and remotely retrieving data using devices called RFID tags or transponders, which are objects that can be attached to or incorporated into a product, animal, person and/or other physical objects for the purpose of identification using radio waves.

The term "geographical positioning" refers to reception of signals from which the geographical location of the recipient can be derived, which are transmitted from sources such as but not limited to Global Positioning System (GPS) satellites and/or European Satellite Navigation System GALILEO satellites.

The term "authenticating party" refers to one or more parties that require identification of a person and that control the identification procedure.

The term "victim" refers to a person subject to identification who is subject to physical threat by one or more persons and/or parties.

The term "aggressor" refers to one or more persons and/or parties imposing physical threat upon one or more victims.

The term "continuous" refers to conditions under which a task is repeatedly iterated for an unlimited number of cycles, until the person subject to identification, the authenticating party and/or one or more authorized third parties decide to terminate the iteration.

The term "real-time" refers to conditions under which the duration between consecutive repetitive actions is very small, at maximum no greater than one second, and typically in the order of milliseconds or microseconds.

Brief Summary of the Invention

In accordance with the present invention, the disadvantages of prior identification methods and systems have been overcome.

The present invention comprises a method for identifying a person based upon characteristics of said person's eye(s) wherein varying directions are continuously indicated to said person towards which said person's eye(s) should be turned wherein at least a portion of said directions are unknown prior to the identification procedure and at least a portion of said directions differ from the general direction(s) from which information on said person's eye(s) is captured; information on said portions of said person's eye(s) is continuously captured as said person's eye(s) turn(s) towards said directions, wherein said information comprises patterns on the exterior and/or interior of said person's eye(s); said patterns are continuously extracted from said captured information; and said person is identified based upon the structure of a synthesis of said patterns.
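
The continuous loop summarized above can be pictured in schematic form. The following Python sketch is purely illustrative: the capture, extraction and synthesis helpers are simulated stand-ins (eye sectors replacing real iris/retina patterns), since the disclosure does not prescribe any particular optics or algorithms.

```python
def identification_loop(directions, enrolled_composite):
    """Sketch of the continuous identification loop: indicate a
    direction, capture the eye as it turns, extract the pattern,
    then compare a synthesis of the patterns against enrollment.
    All helpers below are hypothetical simulations."""
    observed = []
    for d in directions:                        # directions chosen by the
        capture = simulate_capture(d)           # authenticating party
        observed.append(extract_pattern(capture))
    composite = synthesize(observed)            # synthesis of the patterns
    return composite == enrolled_composite      # structural comparison

def simulate_capture(direction):
    # Placeholder: each direction exposes one 72-degree sector of the eye.
    return int(direction % 360) // 72

def extract_pattern(sector):
    # Placeholder feature: the sector index stands in for real patterns.
    return sector

def synthesize(patterns):
    # Placeholder synthesis: the set of all sectors observed so far.
    return frozenset(patterns)

# Example: enrollment previously covered all five sectors (0-4)
enrolled = frozenset(range(5))
print(identification_loop([0, 80, 150, 220, 300], enrolled))  # -> True
```

Note that identification succeeds only once the indicated directions have exposed enough of the eye to reconstruct the enrolled composite, which mirrors the claim that the captured directions differ from, and are unknown before, the capture itself.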

According to one embodiment of said method for identifying a person based upon characteristics of said person's eye(s), said directions are preferably decided by an authenticating party.

According to another embodiment of said method for identifying a person based upon characteristics of said person's eye(s), said authenticating party preferably decides whether adequate information on said person's eye(s) has been captured and if so, said authenticating party preferably decides, based upon information on said person's eye(s) captured and stored at a prior time, whether said captured information corresponds to information expected by said authenticating party to be captured when said person's eye(s) turns towards said directions.

According to yet another embodiment of said method for identifying a person based upon characteristics of said person's eye(s), said information is preferably captured in real-time.

According to yet another embodiment of said method for identifying a person based upon characteristics of said person's eye(s), said directions are preferably indicated to said person via visual stimuli.

According to yet another embodiment of said method for identifying a person based upon characteristics of said person's eye(s), said directions are preferably indicated to said person via auditory stimuli.

According to yet another embodiment of said method for identifying a person based upon characteristics of said person's eye(s), said directions are preferably indicated to said person via somatic stimuli.

According to yet another embodiment of said method for identifying a person based upon characteristics of said person's eye(s), said information preferably further comprises eye motion characteristics of said person's eye(s).

According to yet another embodiment of said method for identifying a person based upon characteristics of said person's eye(s), portions of said person's eye(s) are preferably illuminated with a single wavelength or plurality of wavelengths, in such a manner that different portions of said person's eye(s) may preferably be illuminated with different wavelengths, and said wavelengths may be varied during the identification procedure depending on the characteristics of different portions of said person's eye(s), specific physiological characteristics of said person's eye(s), external circumstances, and/or other reasons due to which the quality of said information can be enhanced.

According to yet another embodiment of said method for identifying a person based upon characteristics of said person's eye(s), (1) an image capturing apparatus preferably indicates directions for said person to turn his or her eye(s) and captures information on said person's eye(s); and (2) an authentication apparatus preferably analyzes information received from said image capturing apparatus.

According to yet another embodiment of said method for identifying a person based upon characteristics of said person's eye(s), said authentication apparatus preferably transmits to said image capturing apparatus said directions to be indicated to said person.

According to yet another embodiment of said method for identifying a person based upon characteristics of said person's eye(s), said directions transmitted from said authentication apparatus to said image capturing apparatus are preferably expressed as guidance data preferably using a range of varying semantics, such as but not limited to: (1) the next target direction, (2) a series of said next target directions, (3) an offset to said next target direction relative to one or more previous directions, (4) a series of said offsets to said next target directions relative to one or more previous target directions, (5) a vector comprising a velocity and direction or path for continuously shifting said target direction, (6) a series of said vectors, and/or (7) any combination of the aforementioned semantics.
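The guidance-data semantics above can be pictured as a small set of message records plus a resolver that expands a mixed sequence into absolute target directions. The following is a minimal illustrative sketch only; the record names, the (azimuth, elevation) angle representation, and the fixed time step are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

# Hypothetical guidance-data records corresponding to semantics (1), (3), (5).
@dataclass
class TargetDirection:
    """Absolute next target direction as (azimuth, elevation) in degrees."""
    azimuth: float
    elevation: float

@dataclass
class DirectionOffset:
    """Offset to the next target direction relative to the previous one."""
    d_azimuth: float
    d_elevation: float

@dataclass
class DirectionVector:
    """Continuous shift of the target: a path direction and angular velocity."""
    d_azimuth: float
    d_elevation: float
    deg_per_second: float

GuidanceItem = Union[TargetDirection, DirectionOffset, DirectionVector]

def resolve(items: List[GuidanceItem],
            start: Tuple[float, float] = (0.0, 0.0),
            dt: float = 1.0) -> List[Tuple[float, float]]:
    """Expand a mixed guidance sequence into absolute target directions."""
    az, el = start
    out = []
    for item in items:
        if isinstance(item, TargetDirection):
            az, el = item.azimuth, item.elevation
        elif isinstance(item, DirectionOffset):
            az, el = az + item.d_azimuth, el + item.d_elevation
        else:  # DirectionVector: shift the target continuously for dt seconds
            az += item.d_azimuth * item.deg_per_second * dt
            el += item.d_elevation * item.deg_per_second * dt
        out.append((az, el))
    return out
```

Series semantics (2), (4), and (6) fall out naturally, since `resolve` already accepts a list of records of any of the three kinds.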

According to yet another embodiment of said method for identifying a person based upon characteristics of said person's eye(s), (1) said image data captured from said person's eye(s) preferably constitutes said information transmitted from said image capturing apparatus to said authentication apparatus, and/or (2) characteristics extracted from said image data captured from said person's eye(s) preferably constitute said information transmitted from said image capturing apparatus to said authentication apparatus.

According to yet another embodiment of said method for identifying a person based upon characteristics of said person's eye(s), the identification procedure of said person preferably comprises: (1) said image capturing apparatus being positioned in front of said person's eye(s); (2) said authentication apparatus determining a new direction or series of new directions to which said person should turn his or her eye(s), and transmitting said direction(s) to said image capturing apparatus; (3) said image capturing apparatus indicating to said person the direction(s) to which said person should turn his or her eye(s) and illuminating one or more portions of said person's eye(s), and while said eye(s) turn(s) towards the indicated direction said image capturing apparatus capturing subsequent images of portions of said person's eye(s), and said image capturing apparatus transmitting information captured from said person's eye(s) to said authentication apparatus; (4) said authentication apparatus combining received information into a composite set of information; (5) said authentication apparatus determining whether said composite set of information is adequate for identifying said person and, if not, returning to step 2; (6) said authentication apparatus disclosing the result of said identification procedure and taking possible actions.
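The loop formed by steps 2 through 6 can be sketched as follows. This is an illustrative stand-in only: the class and method names are invented for the sketch, the capture step is a stub returning synthetic fragments, and the adequacy test is reduced to a simple fragment count.

```python
import random

class ImageCapturingApparatus:
    """Hypothetical stand-in for the claimed image capturing apparatus."""

    def indicate_and_capture(self, direction):
        # Step 3: indicate the direction, illuminate the eye, capture images.
        # Stubbed here as a synthetic fragment keyed by the direction.
        return {"direction": direction, "patterns": f"fragment@{direction}"}

class AuthenticationApparatus:
    """Hypothetical stand-in for the claimed authentication apparatus."""

    def __init__(self, required_fragments=3):
        self.required = required_fragments
        self.composite = []

    def next_direction(self):
        # Step 2: a direction unknown to the person prior to the procedure.
        return (random.uniform(-30.0, 30.0), random.uniform(-20.0, 20.0))

    def combine(self, info):
        # Step 4: merge the received fragment into the composite set.
        self.composite.append(info)

    def adequate(self):
        # Step 5: decide whether enough information has been gathered.
        return len(self.composite) >= self.required

def identify(capturer, authenticator):
    """Steps 2-6: loop until the composite set of information is adequate."""
    while not authenticator.adequate():
        direction = authenticator.next_direction()
        info = capturer.indicate_and_capture(direction)
        authenticator.combine(info)
    return authenticator.composite  # Step 6: result disclosed to the caller
```

The design point the loop captures is that the authenticating side, not the capture device, decides both where the eye should look next and when the procedure terminates.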

According to yet another embodiment of said method for identifying a person based upon characteristics of said person's eye(s), communication between said image capturing apparatus and said authentication apparatus is preferably partially or wholly channeled via (1) one or more cellular networks, in which case communication between said image capturing apparatus and said authentication apparatus is preferably relayed via said person's mobile communication terminal that preferably communicates with said image capturing apparatus either wirelessly, via wire, or via directly docking to an appropriate outlet of said mobile communication terminal; and/or (2) one or more computer networks, in which case communication between said image capturing apparatus and said authentication apparatus is preferably relayed via a network-enabled data terminal, which preferably communicates with said image capturing apparatus either wirelessly, via wire, or via directly docking to an outlet of said data terminal.

According to yet another embodiment of said method for identifying a person based upon characteristics of said person's eye(s), in the case of said person residing in the vicinity of a point-of-presence at which authentication is required, communication between said image capturing apparatus and said authentication apparatus is preferably relayed via said point-of- presence that preferably communicates with said image capturing apparatus either wirelessly, via wire, or via directly docking to an appropriate outlet at said point-of-presence.

According to yet another embodiment of said method for identifying a person based upon characteristics of said person's eye(s), said image capturing apparatus is preferably further equipped with an optical system, preferably comprising one or more optical elements, allowing (1) the illuminating portion of said image capturing apparatus to illuminate portions of said person's eye(s) simultaneously from a plurality of angles, preferably as if the sources of illumination were situated on a concave double-curved surface or surfaces in front of said person's eye(s); and (2) the image input view of said image capturing apparatus to view portions of said person's eye(s) simultaneously from a plurality of angles, preferably from the perspective of a concave double-curved surface or surfaces in front of said person's eye(s), and consequently preferably forming one or more three-dimensional images of visible portions of said person's eye(s).

According to yet another embodiment of said method for identifying a person based upon characteristics of said person's eye(s), said directions transmitted from said authentication apparatus to said image capturing apparatus are preferably encrypted by said authentication apparatus and preferably decrypted by said image capturing apparatus.

According to yet another embodiment of said method for identifying a person based upon characteristics of said person's eye(s), said captured information on said person's eye(s) transmitted from said image capturing apparatus to said authentication apparatus is preferably encrypted by said image capturing apparatus and preferably decrypted by said authentication apparatus.

According to yet another embodiment of said method for identifying a person based upon characteristics of said person's eye(s), said encryption and said decryption preferably exploits one or more of the following digital encryption/decryption keys: (1) a personal digital key unique to said person; (2) a device-specific digital key unique to said image capturing apparatus and/or to a detachable identification component inserted into said image capturing apparatus; (3) a link-specific digital key unique to said person's communication apparatus and/or to a detachable identification component inserted into said person's communication apparatus; (4) a location-specific digital key unique to the general location whereat said person is situated; (5) a unique operator-specific digital key; and (6) a time-code that varies over time.
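One plausible way to exploit several of the listed key types at once is to mix whichever keys are available with the time-code into a single session key. The sketch below is an assumption-laden illustration, not the disclosed scheme: the HMAC-SHA-256 construction, the 30-second time step, and the `now` parameter are all choices made for the example.

```python
import hashlib
import hmac
import time

def derive_session_key(*key_parts: bytes, time_step: int = 30, now=None) -> bytes:
    """Illustrative derivation of one session key from whichever of the
    listed key types are present (personal, device-specific, link-specific,
    location-specific, and/or operator-specific keys), mixed with a
    time-code (key type 6) that changes every `time_step` seconds, so an
    intercepted session key ages out quickly."""
    if now is None:
        now = time.time()
    time_code = str(int(now) // time_step).encode()  # varies over time
    material = b"".join(key_parts)                   # concatenate available keys
    return hmac.new(material, time_code, hashlib.sha256).digest()
```

Changing any key part or crossing a time-step boundary yields an unrelated session key, which matches the intent that both the guidance data and the captured fragments are useless to an interceptor who lacks the key material.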

According to yet another embodiment of said method for identifying a person based upon characteristics of said person's eye(s), said method preferably further comprises a means for reporting abnormal conditions during the identification procedure wherein an alarm signal is preferably generated when abnormal conditions are detected by the image capturing apparatus and/or authentication apparatus, and notification of said alarm signal is optionally relayed to one or more third parties.

Although each of the aforementioned preferred embodiments of the present invention is disclosed individually, any combination of one or more of said embodiments is possible within the scope of the present invention.

The present invention further comprises a system for identifying a person based upon characteristics of said person's eye(s) wherein an image capturing apparatus continuously indicates varying directions to said person towards which said person's eye(s) should be turned wherein at least a portion of said directions are unknown prior to the identification procedure and at least a portion of said directions differ from the general direction(s) from which information on said person's eye(s) is captured; information on said portions of said person's eye(s) is continuously captured by said image capturing apparatus as said person's eye(s) turn(s) towards said directions, wherein said information comprises patterns on the exterior and/or interior of said person's eye(s); and said person is identified based upon the structure of a synthesis of said patterns.

According to one embodiment of said system for identifying a person based upon characteristics of said person's eye(s), said directions are preferably decided by an authenticating party.

According to another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), said authenticating party preferably decides whether adequate information on said person's eye(s) has been captured and if so, said authenticating party preferably decides, based upon information on said person's eye(s) captured and stored at a prior time, whether said captured information corresponds to information expected by said authenticating party to be captured when said person's eye(s) turns towards said directions.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), said information is preferably captured in real-time.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), said directions are preferably indicated to said person via visual stimuli.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), said directions are preferably indicated to said person via auditory stimuli.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), said directions are preferably indicated to said person via somatic stimuli.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), said information preferably further comprises eye motion characteristics of said person's eye(s).

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), portions of said person's eye(s) are preferably illuminated with a single wavelength or a plurality of wavelengths, in such a manner that different portions of said person's eye(s) may preferably be illuminated with different wavelengths, and said wavelengths may be varied during the identification procedure depending on the characteristics of different portions of said person's eye(s), specific physiological characteristics of said person's eye(s), external circumstances, and/or other factors due to which the quality of said information can be enhanced.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), said system further comprises an authentication database that comprises information on eyes of persons registered in said system.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), said system preferably further comprises a security database that comprises digital keys for encryption and decryption of data.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), (1) an image capturing apparatus preferably indicates directions for said person to turn his or her eye(s) and captures information on said person's eye(s); and (2) an authentication apparatus preferably analyzes information received from said image capturing apparatus.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), said authentication apparatus preferably transmits to said image capturing apparatus said directions to be indicated to said person.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), said directions transmitted from said authentication apparatus to said image capturing apparatus are preferably expressed as guidance data preferably using a range of varying semantics, such as but not limited to: (1) the next target direction, (2) a series of said next target directions, (3) an offset to said next target direction relative to one or more previous directions, (4) a series of said offsets to said next target directions relative to one or more previous target directions, (5) a vector comprising a velocity and direction or path for continuously shifting said target direction, (6) a series of said vectors, and/or (7) any combination of the aforementioned semantics.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), (1) said image data captured from said person's eye(s) preferably constitutes said information transmitted from said image capturing apparatus to said authentication apparatus, and/or (2) characteristics extracted from said image data captured from said person's eye(s) preferably constitute said information transmitted from said image capturing apparatus to said authentication apparatus.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), the identification procedure of said person preferably comprises: (1) said image capturing apparatus being positioned in front of said person's eye(s); (2) said authentication apparatus determining a new direction or series of new directions to which said person should turn his or her eye(s), and transmitting said direction(s) to said image capturing apparatus; (3) said image capturing apparatus indicating to said person the direction(s) to which said person should turn his or her eye(s) and illuminating one or more portions of said person's eye(s), and while said eye(s) turn(s) towards the indicated direction said image capturing apparatus capturing subsequent images of portions of said person's eye(s), and said image capturing apparatus transmitting information captured from said person's eye(s) to said authentication apparatus; (4) said authentication apparatus combining received information into a composite set of information; (5) said authentication apparatus determining whether said composite set of information is adequate for identifying said person and, if not, returning to step 2; (6) said authentication apparatus disclosing the result of said identification procedure and taking possible actions.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), communication between said image capturing apparatus and said authentication apparatus is preferably partially or wholly channeled via (1) one or more cellular networks, in which case communication between said image capturing apparatus and said authentication apparatus is preferably relayed via said person's mobile communication terminal that preferably communicates with said image capturing apparatus either wirelessly, via wire, or via directly docking to an appropriate outlet of said mobile communication terminal; and/or (2) one or more computer networks, in which case communication between said image capturing apparatus and said authentication apparatus is preferably relayed via a network-enabled data terminal, which preferably communicates with said image capturing apparatus either wirelessly, via wire, or via directly docking to an outlet of said data terminal.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), in the case of said person residing in the vicinity of a point-of-presence at which authentication is required, communication between said image capturing apparatus and said authentication apparatus is preferably relayed via said point-of- presence that preferably communicates with said image capturing apparatus either wirelessly, via wire, or via directly docking to an appropriate outlet at said point-of-presence.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), said image capturing apparatus is preferably further equipped with an optical system, preferably comprising one or more optical elements, allowing (1) the illuminating portion of said image capturing apparatus to illuminate portions of said person's eye(s) simultaneously from a plurality of angles, preferably as if the sources of illumination were situated on a concave double-curved surface or surfaces in front of said person's eye(s); and (2) the image input view of said image capturing apparatus to view portions of said person's eye(s) simultaneously from a plurality of angles, preferably from the perspective of a concave double-curved surface or surfaces in front of said person's eye(s), and consequently preferably forming one or more three-dimensional images of visible portions of said person's eye(s).

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), said directions transmitted from said authentication apparatus to said image capturing apparatus are preferably encrypted by said authentication apparatus and preferably decrypted by said image capturing apparatus.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), said captured information on said person's eye(s) transmitted from said image capturing apparatus to said authentication apparatus is preferably encrypted by said image capturing apparatus and preferably decrypted by said authentication apparatus.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), said encryption and said decryption preferably exploits one or more of the following digital encryption/decryption keys: (1) a personal digital key unique to said person; (2) a device-specific digital key unique to said image capturing apparatus and/or to a detachable identification component inserted into said image capturing apparatus; (3) a link-specific digital key unique to said person's communication apparatus and/or to a detachable identification component inserted into said person's communication apparatus; (4) a location-specific digital key unique to the general location whereat said person is situated; (5) a unique operator-specific digital key; and (6) a time-code that varies over time.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), said system preferably further comprises a means for reporting abnormal conditions during the identification procedure wherein an alarm signal is preferably generated when abnormal conditions are detected by the image capturing apparatus and/or authentication apparatus, and notification of said alarm signal is optionally relayed to one or more third parties.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), the image capturing apparatus preferably is a mobile electronic apparatus.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), the image capturing apparatus preferably is a fixed or detachable part of a mobile and/or cellular handset.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), the image capturing apparatus preferably is a fixed or detachable part of a computer terminal.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), the image capturing apparatus preferably comprises an illumination part; an image capturing part preferably further comprising an image preprocessing part; preferably an optical system; a data processing part; preferably a wireless radio communication part; preferably an external connector part; preferably a local electrical power source part; and preferably a user control part.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), said system is preferably used in an environment that lacks external data network and/or cellular network connectivity, in which case said image capturing apparatus preferably communicates directly with a stand-alone authentication apparatus.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s), said system preferably further comprises an electronic portal wherein a user may log into the private area of said electronic portal preferably using the identification method of this disclosure and/or other secure means of identification.

According to yet another embodiment of said system for identifying a person based upon characteristics of said person's eye(s) further comprising an electronic portal, said user may after logging into the private area of said electronic portal preferably carry out various actions, such as but not limited to: (1) monitor and/or modify said user's profile comprising but not limited to information on organizations that accept as valid proof of identity said user identifying himself or herself according to the disclosed identification method; and/or (2) monitor a list of said user's prior identification events.

Although each of the aforementioned preferred embodiments of the present invention is disclosed individually, any combination of one or more of said embodiments is possible within the scope of the present invention.

Disclosure of Certain Advantageous Aspects of Certain Embodiments of the Present Invention

One aspect of certain preferred embodiments of the present invention is that the disclosed identification method is believed to be unaffected by commonly encountered falsification techniques used to produce a false identification result when identifying a person using traditional eye recognition techniques, due to the fact that at least one eye is continuously scanned while moving towards directions selected by the authenticating party, and the identification procedure is completed no earlier than when the authenticating party is satisfied that all necessary information from the eye(s) has been received and verified. Hence, it is believed that the method can be used to provide reliable recognition results under unsupervised conditions, i.e. without the authenticating party and/or one or more trusted third parties monitoring the identification procedure in person.

Another aspect of certain preferred embodiments of the present invention is that the disclosed identification method is believed to be unaffected by illicit interception of data during transmission, due to the fact that only small fragments of information corresponding to a momentary situation during eye motion are transmitted at a time, making it difficult for an imposter to reconstruct a complete and consistent model of the eye(s) subject to analysis, in particular when the imposter has no knowledge of the sequence of motions that the person is instructed to carry out with his or her eye(s), and the guidance data and captured fragments of information on the eye(s) are encrypted.

Yet another aspect of certain preferred embodiments of the present invention is that even in the case that an illicit third party were to successfully intercept a large quantity of transmitted fragments of information that correspond to the entire exterior and/or interior of the eye(s), together with the corresponding guidance data transmitted during the identification procedure, it is believed that achieving false identification using the information would constitute an intricate task, as the imposter would be required to create (1) an exact replica of the interior and/or exterior of the eye(s) that continuously responds to guidance commands in a manner similar to the authentic eye(s), or (2) an apparatus that continuously transmits data to the authenticating party in response to guidance commands in a manner similar to that transmitted by an image capturing apparatus when used in conjunction with the authentic eye(s).

Yet another aspect of certain preferred embodiments of the present invention is that the disclosed identification method is believed to be capable of capturing images of broad areas of the interior and/or exterior of the eye(s), when the person subject to identification turns his or her eye(s) towards the indicated directions.

Yet another aspect of certain preferred embodiments of the present invention is that the disclosed identification method is believed to gather a significant quantity of information on the eye(s) during the identification procedure, and is thus believed to result in more reliable identification than contemporary iris and/or retinal recognition systems that are believed to base identification conclusions upon a single captured image or a limited-size series of captured images. In the case of such preferred embodiments of the present invention that utilize real-time image capturing, which by definition entails capturing images at very short intervals, the quantity of images per second that can be captured and analyzed by the image capturing apparatus and/or authentication apparatus can be on the order of thousands, millions, or more.

Yet another aspect of the present invention is that such preferred embodiments of the present invention that comprise real-time image capturing capabilities are further believed to be capable of capturing detailed information on eye motion characteristics, which is further believed to enhance the reliability of the identification procedure.

Yet another aspect of the present invention is the low quantity of data per captured image transferred from the image capturing apparatus to the authentication apparatus in such preferred embodiments of the present invention wherein the image capturing apparatus first extracts characteristics of the eye(s) prior to transmitting these to the authentication apparatus, in comparison with systems that transmit the captured image data as is without first extracting the characteristics from the image data.

Yet another aspect of the present invention is the low requirement for data storage capacity per registered person within the identification database, due to the extraction of characteristics from the image data and the compiling of these characteristics into a composite set of characteristics requiring less storage capacity than if the characteristics extracted from each captured image were stored separately, in comparison with systems that store the image data as is without first extracting characteristics from the image data and/or compiling the extracted characteristics into a composite set of characteristics.
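The storage-saving argument above rests on overlapping images sharing many characteristics, so that storing the union once is smaller than storing each image's characteristics separately. A minimal sketch, with characteristics reduced to hashable feature labels purely for illustration:

```python
def compile_composite(per_image_features):
    """Illustrative compilation of per-image characteristic sets into one
    composite set: successive images of the same eye overlap heavily, so
    the union of their extracted characteristics is stored only once,
    instead of storing each image's characteristics separately."""
    composite = set()
    for features in per_image_features:
        composite |= set(features)  # duplicates across images collapse
    return composite
```

For example, two overlapping images contributing `["a", "b"]` and `["b", "c"]` compile into a composite of three characteristics rather than the four that separate storage would require.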

Yet another aspect of certain preferred embodiments of the present invention is that the disclosed identification method is believed to be considered less intrusive by the user than contemporary retinal scanning systems, as direct eye contact is unnecessary, requiring the person to focus on an alignment light is unnecessary, and having the person stare directly at a light source is minimized or avoided completely, as the light can preferably be shifted in real-time so that it is scarcely or never situated in the exact line-of-sight of the eye. In addition, in the case of certain preferred embodiments of the disclosed invention, in which the image capturing apparatus is a personal small-size low-power mobile apparatus used frequently by the user, it is believed that the familiarity of the apparatus, the sense of ownership of and/or control over the apparatus, the small size and low-power aspects of the apparatus, and experience from prior successful use of the apparatus will further decrease possible anxiety and prejudice towards the identification procedure, in comparison with unfamiliar larger-size and/or high-power fixed installations visibly operated and controlled by a third party.

Yet another aspect of certain preferred embodiments of the present invention is that no prior knowledge or skills, other than the natural human reflex of looking towards a visible light or an illuminated image in an otherwise dark environment, are believed to be required from the person subject to identification. Hence the identification method is believed to be applicable to virtually any person, including infants, elderly, mentally disabled, hearing-impaired, and illiterate persons. In addition, in the case of a supervised situation, in which a representative of the authenticating party is present during the identification procedure, no mutual spoken or written language is believed necessary between the representative and the person subject to identification. In the case of the person subject to identification not being willing to be identified, detecting intentional misconduct during the identification is believed to be straightforward, as, if the person consistently looks away from the indicated direction, it is believed that he or she is obviously attempting to avoid successful identification. Even in such situations a considerable quantity of image information can be gathered from the eye(s) of the uncooperative person. When necessary, the uncooperative person can also be instructed to turn his or her eye(s) towards directions that are irrelevant regarding successful completion of the identification procedure, in which case he or she is expected to look instead towards other directions that are relevant regarding successful completion of the identification procedure. It is further believed that the uncooperative person can be forced to turn his or her eye(s) by directing bright light into the current line-of-sight of the eye(s).

Yet another aspect of certain preferred embodiments of the present invention is that the disclosed identification method can be further enhanced by verifying the liveness and authenticity of the eye(s) of the person subject to identification using various liveness verification techniques, such as but not limited to: (1) illuminating the eye(s) using various wavelengths and analyzing the captured images of the eye(s) for coaxial retinal back-reflection; (2) analyzing reflections from corneal and lens surfaces; (3) analyzing involuntary eye behavior, such as but not limited to pupillary unrest and pupillary light reflex; (4) initiating and analyzing voluntary eye behavior, such as but not limited to eye motions on command; and (5) analyzing other well-known characteristics of an authentic and live eye.
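Technique (3), the pupillary light reflex, lends itself to a simple numeric check: after a light stimulus, a live pupil constricts, whereas a printed iris image or photograph does not. The sketch below is illustrative only; the function name, the 10% constriction threshold, and the use of raw diameter samples are assumptions made for the example.

```python
def pupillary_light_reflex_ok(diameters_mm, stimulus_index,
                              min_constriction=0.10):
    """Illustrative liveness check: compare the mean pupil diameter before a
    light stimulus (samples [0, stimulus_index)) with the minimum diameter
    afterwards. A live pupil should constrict by at least `min_constriction`
    as a fraction of its pre-stimulus baseline; a static replica shows no
    such response."""
    baseline = sum(diameters_mm[:stimulus_index]) / stimulus_index
    post_min = min(diameters_mm[stimulus_index:])
    return (baseline - post_min) / baseline >= min_constriction
```

A production system would combine several such indicators, as the passage above suggests, rather than rely on any single one.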

Yet another aspect of certain preferred embodiments of the present invention is that when further enhanced by augmenting the image capturing apparatus with an optical system, the eye(s) can be illuminated simultaneously from a plurality of angles, preferably as if the sources of illumination were situated on a concave double-curved surface or surfaces in front of the eye(s), and the image capturing apparatus can view portions of the eye(s) simultaneously from a plurality of angles, preferably from the perspective of a concave double-curved surface or surfaces in front of the eye(s), and consequently forming three-dimensional images of visible portions of the eye(s).

Yet another aspect of certain preferred embodiments of the present invention is that the disclosed identification method can further be augmented with an alarm mechanism for detecting abnormal conditions during the identification procedure, in which case predetermined eye motions, preferably the victim consistently looking away from the indicated direction(s), will trigger an alarm signal, warning third parties of abnormal conditions during the identification procedure, such as the victim being subject to physical threat by an aggressor. Depending on the circumstances and application for which the identification procedure is used, the alarm can be (1) noticeable to the victim and aggressor, resulting in visible, audible and/or physical actions, such as doors locking or warning lights and sirens being activated, or (2) silent, in which case triggering of the alarm signal remains unknown to the aggressor. When said image capturing apparatus is firmly pressed against the area of the victim's face surrounding his or her eye(s) and an alarm signal is triggered, for example by consistently looking away from the direction(s) indicated to the victim, the disclosed method provides a secure means of triggering a silent alarm imperceptible to the aggressor.
This is the case even if the aggressor is aware that the victim consistently looking away from the indicated direction(s) will trigger an alarm and the aggressor has means of observing the eye motions of the victim during the identification procedure, for example by observing the movements of the second eye in the case that this is not pressed against the image capturing apparatus, as the directions towards which the victim should turn his or her eye(s) are unknown both to the aggressor and the victim prior to the identification procedure and thus the directions that trigger the alarm are also unknown in advance; consequently the aggressor has no means of determining whether the victim has looked in the correct directions or triggered a silent alarm. In addition, the victim may be notified of successfully triggering the silent alarm via the image capturing apparatus, for example by flashing a sequence of lights. Security can be further increased when all lights and indicators are deactivated as soon as the image capturing apparatus is removed from in front of the victim's eye(s), for example by using a mechanism for detecting physical contact between the image capturing apparatus and the area of the victim's face surrounding his or her eye(s) and/or by analyzing the captured image sequence.
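The "consistently looking away" trigger described above can be sketched as follows, purely for illustration; the deviation threshold, the consecutive-miss limit and all identifiers are invented assumptions, not part of the disclosure:

```python
import math

def gaze_deviation_deg(indicated, observed):
    """Angular difference in degrees between two 2-D gaze direction
    vectors given as (x, y) pairs. Purely illustrative."""
    dot = indicated[0] * observed[0] + indicated[1] * observed[1]
    norm = math.hypot(*indicated) * math.hypot(*observed)
    # Clamp to [-1, 1] to guard acos against rounding error.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

class SilentAlarmMonitor:
    """Sketch of the alarm logic: if the person consistently looks away
    from the indicated direction(s), an alarm flag is raised. The
    threshold values here are assumptions, not from the disclosure."""
    def __init__(self, max_deviation_deg=20.0, consecutive_limit=5):
        self.max_deviation_deg = max_deviation_deg
        self.consecutive_limit = consecutive_limit
        self._misses = 0
        self.triggered = False

    def observe(self, indicated, observed):
        """Feed one (indicated, observed) gaze pair per captured frame."""
        if gaze_deviation_deg(indicated, observed) > self.max_deviation_deg:
            self._misses += 1
        else:
            self._misses = 0  # a compliant glance resets the counter
        if self._misses >= self.consecutive_limit:
            self.triggered = True
        return self.triggered
```

A silent alarm would simply consult `triggered` without any visible or audible action.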

Yet another aspect of certain preferred embodiments of the present invention is that the image capturing apparatus can be implemented as a small-size lightweight mobile electronic apparatus, which is easily carried by the user, for example in the user's pocket and/or as a keyring or the like.

Yet another aspect of certain preferred embodiments of the present invention is that the image capturing apparatus may further be used as part of a stand-alone identification system in an isolated environment that lacks external data network and/or cellular network connectivity, such as but not limited to a building, gate, vehicle, vessel, aircraft or mobile weapon system. In this case the image capturing apparatus communicates via a local wireless data network with a stand-alone authentication apparatus.

Yet another aspect of certain preferred embodiments of the present invention is that in the case that the image capturing apparatus of the person is stolen, misplaced or otherwise absent, the person may alternatively use another image capturing apparatus to authenticate himself or herself and gain appropriate access.

Yet another aspect of certain preferred embodiments of the present invention is that the required quantity and/or quality of information to be gathered on the eye(s) of the person subject to identification during the identification procedure may be varied according to the purpose of the identification procedure. For example, when a person is identifying himself or herself in order to purchase an inexpensive item, the identification procedure can be brief, capturing only a very limited number of images of the person's eye(s). For more critical applications in which reliability requirements are more stringent, such as but not limited to airport security, nuclear power plant access and/or strategic military applications, a greater quantity of higher-quality information can be gathered during a lengthier identification procedure before confirming the identity of the person subject to identification.
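As an illustrative sketch only, such tiered requirements could be represented as a simple policy table; the tier names and numeric values below are invented for illustration and are not taken from the disclosure:

```python
# Hypothetical capture policy: criticality tier -> minimum quantity and
# quality of eye information gathered before confirming identity.
CAPTURE_POLICY = {
    "low":      {"min_fragments": 5,   "min_focus_score": 0.5},  # e.g. small purchase
    "standard": {"min_fragments": 30,  "min_focus_score": 0.7},
    "critical": {"min_fragments": 200, "min_focus_score": 0.9},  # e.g. airport security
}

def required_fragments(level):
    """Return the assumed minimum number of image fragments for a tier."""
    return CAPTURE_POLICY[level]["min_fragments"]

print(required_fragments("low"))       # 5
print(required_fragments("critical"))  # 200
```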

Yet another aspect of certain preferred embodiments of the present invention is that the image capturing apparatus may be personalized according to the preferences of the user. For example, the casing of the image capturing apparatus may comprise images, text, colors and/or patterns desired by the user, and the casing may be cast into various forms and/or shapes, the preferred casing being selected by the user. Furthermore, the image capturing apparatus may display one or more illuminated images to the user when indicating the directions towards which he or she should look during the identification procedure, the topics, styles and/or themes of which are according to the user's preferences; for example, the image capturing apparatus of an infant or child could display illuminated images of cartoon characters. Organizations can also benefit from personalization of the image capturing apparatus. For example, an organization could provide its existing and/or potential clients with image capturing apparatus free-of-charge or at an inexpensive price, and brand the image capturing apparatus by personalizing the casing and images displayed to the user according to its corporate image, products and/or other promotional themes. Organizations could also purchase the right to use batches of manufactured image capturing apparatus as an advertising medium by branding these similarly.

Disclosure of Fundamental Principles: Prior Art vs. Present Invention

Below are descriptions of what is believed to constitute the basic principles of contemporary iris and retinal recognition methods and systems, as well as the basic principles of certain preferred embodiments of the present invention.

Fig. 1 illustrates a generic sequence of events that are believed to take place during the identification procedure of contemporary retinal recognition systems. In step 100, the person subject to identification is believed to be required to align his or her eye(s) with an indicator shown in an image capturing apparatus. In step 101, the image capturing apparatus is believed to illuminate the retina(s) of the eye(s) of the person. In step 102, the image capturing apparatus is believed to capture one or more images of the retina(s). In step 103, the image capturing apparatus is believed to transmit the captured image data to an authentication apparatus. In step 200, the authentication apparatus is believed to analyze the received image data. In step 201, the authentication apparatus is believed to determine whether the quality of the image data is adequate for reaching reliable conclusions on the identity of the person; if so, the authentication apparatus is believed to determine the result of the identification procedure in step 202 and take possible further actions; if not, the authentication apparatus is believed to, in certain systems, request new image data from the image capturing apparatus in step 203.

Fig. 2 illustrates a generic sequence of events that are believed to take place during the identification procedure of contemporary iris recognition systems. In step 100, an image capturing apparatus is believed to, in certain systems, illuminate the iris(es) of the eye(s) of the person. In step 101, the image capturing apparatus is believed to capture one or more images of the iris(es). In step 102, the image capturing apparatus is believed to transmit the captured image data to an authentication apparatus. In step 200, the authentication apparatus is believed to attempt to locate the iris(es) from the received image data, remove eyelids and eyelashes from the image data, and analyze the image data. In step 201, the authentication apparatus is believed to determine whether the quality of the image data is adequate for reaching reliable conclusions on the identity of the person; if so, the authentication apparatus is believed to determine the result of the identification procedure in step 202 and take possible further actions; if not, the authentication apparatus is believed to, in certain systems, request new image data from the image capturing apparatus in step 203. Certain known iris recognition systems are believed to further comprise one or more steps to align the person's eye(s) with the image capturing apparatus prior to step 100.

Fig. 3 illustrates a generic sequence of events that take place during the identification procedure according to certain preferred embodiments of the present invention. In step 100, the image capturing apparatus indicates to the person direction(s) towards which his or her eye(s) should be turned. In step 101, the image capturing apparatus illuminates portions of the person's eye(s). In step 102, as the person's eye(s) turn(s) towards the indicated direction(s), the image capturing apparatus continuously captures fragments of information on the eye(s). In step 103, the image capturing apparatus transmits the fragments of information to the authentication apparatus, after which the image capturing apparatus returns to step 100. In step 200, the authentication apparatus analyzes the fragments of information upon receipt from the image capturing apparatus. In step 201, the authentication apparatus determines the next area(s) of the eye(s) to analyze, if any. In step 202, the authentication apparatus transmits, in the form of guidance data, the next direction(s) to the image capturing apparatus. In step 203, the authentication apparatus determines whether adequate information on the person's eye(s) has been gathered in order to take a decision upon the result of the identification procedure; if not, the authentication apparatus returns to step 200; if so, the authentication apparatus transmits the result of the identification procedure to relevant parties and terminates the identification procedure in step 204.
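The continuous iterative loop of Fig. 3 can be sketched as follows, purely for illustration; every identifier is a hypothetical placeholder, and real image analysis and guidance logic would replace the stubs:

```python
import random

# --- Hypothetical stubs standing in for real hardware and analysis. ---
def random_direction():
    """Direction unknown prior to the procedure (steps 100/202)."""
    return random.choice(["up", "down", "left", "right", "centre"])

def capture_fragment(direction):
    """A real apparatus would return image data of the eye portions
    visible while the eye turns towards `direction` (steps 101-102)."""
    return {"direction": direction}

def continuous_identification(needed_fragments=8, max_iterations=100):
    """Sketch of the continuous loop: indicate a direction, capture a
    fragment, let analysis choose the next direction, and stop once
    enough information has been gathered (steps 203-204)."""
    collected = []
    direction = random_direction()
    for _ in range(max_iterations):
        fragment = capture_fragment(direction)   # steps 101-103
        collected.append(fragment)
        if len(collected) >= needed_fragments:   # step 203: adequate info?
            return collected                     # step 204: decide & stop
        direction = random_direction()           # steps 200-202: guidance
    return None  # procedure timed out without a decision
```

Note that capture and analysis interleave on every iteration, rather than analysis following a completed capture phase.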

As shown in Fig. 1 and Fig. 2, contemporary iris and retinal recognition systems are believed to carry out the identification procedure sequentially: one or more images are captured; capturing may be repeated if the quality of the captured image(s) is inadequate; and once acquired, the images are analyzed for authentication purposes.
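For contrast, the sequential flow believed to be common to Figs. 1 and 2 can be sketched as a capture, quality-check, authenticate loop with a bounded number of re-capture attempts; the callables and the attempt limit are hypothetical placeholders:

```python
def sequential_identification(capture, quality_ok, authenticate, max_attempts=3):
    """Sketch of the sequential flow of Figs. 1 and 2: capture an image,
    check its quality, re-capture on failure, and only then run a single
    terminal authentication step."""
    for _ in range(max_attempts):
        image = capture()
        if quality_ok(image):
            return authenticate(image)  # analysis happens once, at the end
    return None  # gave up: no adequate image was acquired
```

Unlike the loop of Fig. 3, capture and authentication here are separate phases: authentication never feeds guidance back into capturing.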

Fig. 3 shows the sequence of events according to certain preferred embodiments of the present invention, in which the identification procedure is a continuous iterative process: directions are continuously indicated to the person; information is continuously captured; the information is continuously analyzed by the authenticating party; and the process continues until the authenticating party reaches a conclusion regarding the person's identity.

A continuous iterative process is a fundamentally different concept from a sequential series of events. Several advantageous aspects of a continuous iterative eye recognition process, in contrast to contemporary sequential eye recognition techniques, are described in this disclosure.

Disclosure of Detailed Differences: Prior Art vs. Present Invention

In the following is a detailed disclosure of certain fundamental differences between the present invention according to independent claims 1 and 6 and each of the aforementioned inventions.

The contents of claim 1 comprising a method for identifying a person and claim 6 comprising a system for identifying a person are largely similar and comprise: (1) a method/system for identifying a person; (2) the method/system is based upon characteristics of the person's eye(s); (3) varying directions are indicated to the person towards which he or she should turn his or her eye(s); (4) the indication of the directions is repeated continuously; (5) at least a portion of the directions are unknown prior to commencement of the identification procedure; (6) at least a portion of the directions differ from the general direction(s) from which information on the person's eye(s) is captured; (7) information on portions of the person's eye(s) is captured; (8) the capturing of information is repeated continuously; (9) the capturing of information takes place while the person's eye(s) turn(s) towards the indicated directions; (10) the captured information comprises patterns on the exterior and/or interior of the person's eye(s); (11) patterns are extracted from captured information; (12) extraction of patterns is repeated continuously; and (13) identification of the person is based upon the structure of a synthesis of the patterns.

The inventions according to United States patent US4641349 by Leonard Flom and Aran Safir [1], and United States patent US5291560(A) by John G. Daugman [2], are believed to disclose the basic methodology of iris recognition. The 1994 publication [3] and United States patent US5572596(A) [4] by Wildes et al. are believed to comprise an eye alignment step, an image capturing step and an authentication step, which are arranged sequentially and more specifically are not arranged as a continuous process. United States patent US5956122(A) by Rodney Doster [5] is believed to disclose a method and apparatus for identifying a person from a distance based upon capturing an image(s) of his or her iris(es) and using light reflected from the retina to determine the position of the iris. The difference between these inventions and the present invention is believed to be obvious to those skilled in the art.

In the following are described and scrutinized known methods and systems believed to comprise the capturing of multiple iris images.

South Korean patent application KR2001-0006976 by Dae H. Kim, Jun Y. Park and Jang S. Ryu [6] is believed to disclose an iris recognition system based upon capturing a single image together with an animation of a few seconds' duration that is used to study the movement type of the autonomous nerve ring.

United States patent application US2005/0281440(A1) by Frederick A. Perner [7] is believed to disclose an iris feature detector wherein the liveness of the eye is verified by capturing one image of the iris when the eye is first focused on one light source, and then showing a second light source and capturing a second image of the iris, and analyzing whether the eye has turned as expected by studying the difference between the two images.

Japanese patent application JP2004-220376(A) by Tsutomu Nakazawa and Yoshio Ichihashi [8] is believed to disclose a method for executing the security management of equipment, in which an image of an eye is first captured, the position of the iris is detected, the detected iris position is tracked to acquire further movement data, and the movement data is collated with predetermined reference data.

United States patent application US2006/0274919(A1) by Dominick Loiacono and James R. Matey [9] is believed to disclose a method and apparatus for obtaining iris biometric information, and in particular a system comprising an array of cameras and an image processor for determining at least one suitable iris image for further processing.

South Korean patent application KR10-2005-0009959(A) by Kwang H. Bae, Jai H. Kim and Kang R. Park [10] is believed to disclose a method and device for recognizing an iris from all directions using a wide-angle camera that tracks and captures a plurality of images of the eye(s) of a moving person in real-time.

Japanese patent JP2007-11710(A) by Yuichiro Takahashi [11] is believed to disclose an authentication device that captures a series comprising multiple authentication data, such as information on the iris and/or retina, from the person to be authenticated wherein the input conditions are modified possibly in a random manner, and it can be determined whether the acquired authentication data correlates as expected with pre-registered data when taking the modified input conditions into account.

PCT international application WO2006/088042(A1) by Shinichi Tsukahara [12] is believed to disclose a method and device in which a series of LEDs are organized around a camera, and an image is captured as each LED is illuminated. Based upon the captured images, it is determined whether the eye is authentic, before the actual iris recognition procedure.

United States patent US2003/0012413(A1) by Takashi Kusakari and Koji Wakiyama [13] is believed to disclose an identification apparatus that captures two images of the iris under different lighting conditions, and a different pupil size is required between the two images prior to proceeding to authentication.

Japanese patent application JP2000-33080(A) by Takahiro Oda [14] is believed to disclose an iris recognition system in which a plurality of light sources are directed in a random sequence to the pupil of the eye of the person to be authenticated, a plurality of images of the eye are captured, the images are analyzed, and if a biogenic response by the eye is detected in response to the light, the eye is determined authentic.

It is believed that the aforementioned inventions are best characterized as methods and/or systems, according to which the step comprising eye alignment and image capturing, and the step of authentication based upon one or more acquired images, are arranged sequentially and more specifically are not arranged as a continuous process. It is further believed that eye guidance, when present in some of these inventions, is for the purpose of aligning the eye(s) of the person to be authenticated towards the imaging device, and thus does not constitute an additional security measure.

In the following are described and scrutinized known methods and systems believed to comprise the capturing of multiple iris images and synthesizing these into one or more composite images.

PCT international application WO2005/008590(A1) by Takeo Azuma, Kenji Kondo and Masahiro Wakamori [15], and United States patent US2005/0249385(A1) by Kenji Kondo and Takeo Azuma [16], are believed to disclose an authentication method and system that captures a series of iris images and then synthesizes these to form a single iris image for authentication purposes.

PCT international application WO2005/109344(A1) by Kenji Kondo, Takeo Azuma and Masahiro Wakamori [17] is believed to disclose a method and device that captures a series of iris images, and based upon these synthesizes a single iris code for authentication purposes.

French patent FR2884947(A1) by Martin Cottard and Gilles Monteilliet [18] is believed to disclose a system for capturing a three-dimensional image of the iris of the eye using one or more image sensors.

The 2006 publication by Byungjun Son, Sung-Hyuk Cha, and Yillbyung Lee [19] is believed to disclose an iris recognition system using image sequences instead of single still images for recognition, in which image sequences are captured at different focus levels, enabling recognition from defocused iris images and thus reducing the requirement for highly focused sharp iris images.

United States patent application US2003/0152252(A1) by Kenji Kondo, Takeo Azuma, Kenya Uomori and Yoshito Aoki [20] is believed to disclose an authentication method and apparatus, in which a plurality of iris images are captured using a plurality of cameras, which are then synthesized to form an accurate iris image even when extraneous light reflection is present.

It is believed that the aforementioned inventions are best characterized as methods and/or systems, according to which the step comprising eye alignment and image capturing, and the step of authentication based upon one or more acquired images, are arranged sequentially and more specifically are not arranged as a continuous process. It is further believed that eye guidance, when present in some of these inventions, is for the purpose of aligning the eye(s) of the person to be authenticated towards the imaging device, and thus does not constitute an additional security measure.

In the following are described and scrutinized known methods and systems believed to comprise mechanisms for guidance of the eye prior to and/or while capturing one or more images of the iris.

Japanese patent JP2003-108983(A) [21] and United States patent US2006/0120707(A1) [22] by Takashi Kusakari, Jyoji Wada and Tamotsu Kaneko relate to an "eye image pickup apparatus, iris authentication apparatus and portable terminal device having iris authentication function". In the disclosure Kusakari et al. state: "With this configuration, when a person to be authenticated looks at the guide character displayed on the display means, his or her eye is guided to the optimal location and at the optical distance, so that ... (a) camera ... can obtain an enlarged iris image appropriate for the iris authentication process." and further "It is preferable that the guide character be displayed with being superimposed with the image of the eye obtained by the pickup camera. With this configuration, the person to be authenticated can easily and intuitively identify where his or her eye should aim, and can move his or her eye with the guide character as a target." It is thus believed that the invention disclosed by Kusakari et al. is best characterized as a system in which the person to be authenticated and/or the eye of the person is guided towards the imaging apparatus by means of a guidance character, and that the eye of the person is continuously photographed in order to intuitively determine the direction towards which the person should move his or her eye and indicate the direction to the person by means of the guide character, possibly together with an image of the eye. It is also believed that during the guidance step one or more images of the iris of the eye can be acquired, and when a satisfactory iris image has been acquired, the guidance process is terminated and the person is identified by the iris image.

Consequently, it is further believed that the invention disclosed by Kusakari et al. is best characterized as a system comprising an image capturing step wherein the eye of the person to be authenticated is continuously guided towards the optimal position for obtaining a focused image of the iris and the eye is continuously photographed until it is decided that a focused image of the iris has been acquired, and an authentication step wherein the acquired iris image is used for identifying the person.

It is also believed that although according to the invention by Kusakari et al. the exact directions to be displayed to the person are not known prior to the identification procedure, it is known that all directions are intended to direct the line-of-sight of the eye towards the center of the lens of the imaging device, and consequently variation of directions during the image capturing step does not constitute an additional security measure.

Kusakari et al. further state repeatedly: "...the person to be authenticated can be notified that the image appropriate for the iris authentication process is obtained." It is thus believed that although a plurality of images of the eye may be captured during the image capturing step according to the invention by Kusakari et al., the majority of these are captured for the purpose of guiding the eye to the optimal position for photographing the iris of the eye, and the actual identification of the person is based upon a single focused iris image. In Fig. 4, the invention by Kusakari et al. is considered firstly from the perspective of images acquired and used for the purpose of eye guidance, and secondly from the perspective of the focused iris image acquired and used for identifying the person.

Japanese patent application JP2005-157970(A) [23], United States patent US6850631(B1) [24], Canadian patent CA2350309(A1) [25], PCT international application WO00/30525(A2) [26], PCT international application WO01/20561(A1) [27] and United States patent US6333988(B1) [28] are also believed to comprise various eye guidance mechanisms and the capturing of one or more images of the iris for authentication purposes, as described and scrutinized below.

Japanese patent application JP2005-157970(A) by Takeshi Fujimatsu and Yoshito Aoki [23] is believed to disclose an iris recognition system, in which the first eye of the person is guided to an appropriate position by continuously capturing an image of the first eye and using a display unit to show a degraded version of the captured image to the second eye, after which a single image of the first eye is photographed for eye recognition purposes, and in the case that the quality is inadequate, the procedure is repeated.

United States patent US6850631(B1) by Takahiro Oda and Yuji Ohta [24] is believed to disclose an eye alignment mechanism similar to the aforementioned, in the case of a portable photographic device further attached to a monitor displaying the image of the eye as seen by the photographic device.

Canadian patent CA2350309(A1) by James L. Cambier and Clyde Musgrave [25] is believed to disclose a method for unlocking a telecommunications device responsive to the identification of a person, comprising sensors and indicators that assist the person in achieving proper alignment, and in which an image of the iris is repeatedly captured until the image is determined to be of sufficient quality for extraction of characteristics, after which the image is used for authentication of the person.

PCT international applications WO00/30525(A2) by James T. McHugh, James H. Lee and Cletus B. Kuhla [26], and WO01/20561(A1) by James L. Cambier and John E. Siedlarz [27], are believed to disclose an iris imaging apparatus for identification of a person, comprising a means for displaying an image of the iris back to the person and a passive feedback mechanism for assisting the person in aligning his or her eye correctly for capturing an image of the iris, and which repeatedly captures images of the iris until an image of sufficient quality is obtained.

United States patent US6333988(B1) by Christopher H. Seal, Gifford M. Merrick and David J. McCartney [28] is believed to disclose a handheld imaging apparatus comprising several alternative means for achieving correct alignment of the eye in respect to the camera, including a line-of-sight beyond the apparatus, crosshairs, and video display. Once the eye is correctly aligned, the apparatus captures one or more images, analyzes these, and repeats the capturing procedure if the quality of these is inadequate.

It is believed that the aforementioned inventions are best characterized as methods and/or systems, according to which the step comprising eye alignment and image capturing, and the step of authentication based upon one or more acquired images, are arranged sequentially and more specifically are not arranged as a continuous process. It is further believed that eye guidance according to these inventions is for the purpose of aligning the eye(s) of the person to be authenticated towards the imaging device, and thus does not constitute an additional security measure. In Fig. 4, the aforementioned inventions are considered firstly from the perspective of images acquired and used for the purpose of eye guidance, and secondly from the perspective of the iris image(s) acquired and used for identifying the person.

The invention disclosed in Japanese patent JP2006-85226(A) by Motoko Tachibana [29] relates to a "fraudulent detection system in iris authentication". In the disclosure Tachibana states: "In this case, the iris authentication system is provided with; a line-of-sight random guiding means which guides a person's line of sight so that it may randomly moves before photographing his or her irises; a line-of-sight action correctness determination means which determines ... whether a line-of-sight action ... executes such a correct action as guided; and a photographing object correctness determination means which determines ... whether a photographing object is a correct eyeball."

It is thus believed that the invention disclosed by Tachibana is best characterized as a system in which the eyeball of the person to be authenticated is guided towards random directions and based upon line-of-sight variations it is determined whether the eyeball is an authentic eyeball. Based upon further content in the disclosure by Tachibana, it is also believed that in the case that the eyeball is determined authentic, the iris is photographed one or two times, and the photographed images are used for iris authentication purposes.

It is thus further believed that although the eyeball could possibly be continuously photographed for line-of-sight detection purposes during the eye guidance step, possible photographs acquired during this step are not used for authentication purposes, and actual identification of the person is based upon the one or two images acquired in a later iris photographing step. Hence, it is believed that the eye guidance step, the image acquiring step and the authentication step are arranged sequentially according to the invention by Tachibana, and more specifically not arranged as a continuous process. In Fig. 4, the invention by Tachibana is considered firstly from the perspective of the line-of-sight detection step, and secondly from the perspective of the iris capturing step.

The invention disclosed in United States patent US2005/0129286(A1) by Christopher D. Hekimian [30] relates to a "technique using eye position and state of closure for increasing the effectiveness of iris recognition authentication systems". In the disclosure Hekimian states: "The new invention, is unique ... due to the distinguishing characteristics of ... requirement that the multiple iris scanning procedures be made up of a preestablished sequence of images of the eye which can include ... the eye directed into different directions ... (and the) ability to require that the authentication server maintain registries of ... image order data." and further "In effect, a new, additional layer of security is associated with the new technique which is in the form of the extended authentication template consisting of an ordered sequence of images of the authentication candidates eye region. The ordered sequence of eye movements constitutes a kind of password, contained implicit with the iris imaging process." It is thus believed that the invention by Hekimian is best characterized as a technique in which predetermined eye movement sequences are used as a supplementary security means in addition to iris recognition, and that the sequence of directions towards which the person to be authenticated should turn his or her eye is known prior to the identification procedure according to the invention by Hekimian. It is further believed that the invention by Hekimian is based upon verifying eye movement sequences of finite duration, and consequently the identification procedure is best characterized as sequential rather than a continuous process; no mention of a continuous process can be found in the disclosure.

Hekimian further states: "An intruder ... could not gain access to a secured system unless that intruder knew the pattern of ... eye movements that were associated with a control template established by the true, authorized user." It is thus believed that the invention by Hekimian is best characterized as a technique wherein the person to be authenticated must personally have knowledge of the correct sequence of eye movements in order to become successfully authenticated, and which does not comprise a means for eye guidance, due to the fact that such means would contradict the fundamental requirement of being able to verify the person's knowledge regarding the correct eye movement sequence.

Hekimian also states: "With the new invention, a potential intruder ... would conceivably, have to know ... when to look right, left, up, down, and into the lens(es) of the imaging device(s), throughout the entire authentication sequence." It is thus believed that the semantics of "eye movements" according to the invention by Hekimian is best characterized as relating to "eye(s) turning towards approximate directions" rather than towards specific rotational bearings.
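The distinction drawn above can be condensed into a small sketch (assumed code, not taken from either disclosure; the direction names and function names are hypothetical): a fixed eye-movement sequence established at enrollment behaves like a password and can be learned and replayed, whereas a sequence drawn at random for each session cannot be known prior to the identification procedure.

```python
import random

DIRECTIONS = ["up", "down", "left", "right", "center"]

def verify_fixed_sequence(observed, enrolled):
    """Hekimian-style check: the observed gaze sequence must match a
    sequence fixed at enrollment -- it acts like a password."""
    return observed == enrolled

def issue_random_challenge(length, rng=random):
    """Challenge in the sense of the present invention: directions are
    drawn at random, so at least a portion of them are unknown before
    the identification procedure begins."""
    return [rng.choice(DIRECTIONS) for _ in range(length)]

# A fixed sequence can be learned and replayed by an intruder:
enrolled = ["left", "up", "right", "center"]
assert verify_fixed_sequence(["left", "up", "right", "center"], enrolled)

# A fresh random challenge differs from session to session, so a
# recording of one session is of no use in the next:
challenge = issue_random_challenge(4)
assert all(d in DIRECTIONS for d in challenge)
```

The sketch deliberately reduces "eye movements" to approximate direction labels, mirroring the semantics attributed to Hekimian above.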

Japanese patent JP2006-158827(A) by Masahiro Wakamori [31], and Japanese patent JP2006-181012(A) by Masahiro Wakamori and Kaoru Morita [32], are believed to disclose an authentication method and device equipped with a guidance mechanism comprising an arrangement of LEDs for instructing the person to be authenticated to move towards a predetermined position for correct alignment prior to capturing an iris image. It is thus believed that the inventions by Wakamori are best characterized as methods and devices for guiding the person to move towards a specific position prior to the image capturing step, and more specifically not as methods and devices for guiding the person's eyes to turn towards specific directions.

PCT international application WO2006/052004(A1) by Shinichi Tsukahara [33] is believed to disclose an iris authentication device comprising an iris-capturing portion and two saccade indexes that direct the line-of-sight of the person to be authenticated, in which the line-of-sight is expected to change between the first and second saccade indexes as a biological reaction. It is thus believed that the invention by Tsukahara is best characterized as an iris authentication device capturing a single iris image for authentication purposes and further verifying the authentic variation of line-of-sight using two saccade indexes, and more specifically not as a continuous process of guiding the person's eye towards specific directions and continuously capturing images and analyzing these.

In the following are described and scrutinized known methods and systems believed to comprise retinal recognition.

United States patent US4109237 by Robert B. Hill [34] is believed to disclose a method and apparatus wherein a beam of light is focused on a small spot of the retina and the beam is scanned in a circular pattern to generate an analog signal representing the vascular structure of the eye intersecting the circular path of the scanned beam.

United States patent US4393366 by Robert B. Hill [35] is believed to disclose a method and apparatus for acquiring and recording an identification pattern from the reflectance of the fundus of an eye wherein a single light source is used to produce a columnar source beam of infrared light directed into the fixated eye from a plurality of positions.

United States patent US4620318 by Robert B. Hill [36] is believed to disclose a method and apparatus wherein a circle centered on the fovea is scanned with an infrared light source, and the intensity of the radiation reflected from the eye at each sequential location constitutes an identification pattern.

United States patents US6453057(B1) [37] and US6757409(B2) [38] by John Marshall and David Usher are believed to disclose a method comprising capturing an image of an area of the retina including the optic disk, locating the optic disk in the image and generating a unique and consistent signal pattern for identifying the person.

United States patent application US2002/0093645(A1) by Gregory L. Heacock [39] and PCT international application WO02/075639(A1) by Gregory L. Heacock and David F. Müller [40] are believed to disclose a system in which the eye is first aligned along a predetermined axis of the system, a predetermined area of the retina is illuminated, and an image of the area is acquired that includes the optic disk.

PCT international application WO02/07068(A1) by D. Beghuin, P. Chevalier, D. Devenyn, K. Nachtergaele, and J.-M. Wislez [41] is believed to disclose an authentication device wherein at least a partial area of the retina is illuminated and an image thereof captured, and the device optionally further comprises positioning and eye fixation means for correctly aligning the pupil of the eye prior to capturing the image.

United States patent application US2004/0202354(A1) by Takayoshi Togino [42] is believed to disclose an imaging and identity authentication system wherein a plurality of minute imaging optical systems are arranged in rows and columns to acquire images of a divided fundus through separate imaging optical systems, so that a fundus image is synthesized for authentication purposes.

It is believed that the aforementioned inventions are best characterized as methods and/or systems, according to which the step comprising eye alignment, scanning and/or image capturing, and the step of authentication based upon one or more scans and/or acquired images, are arranged sequentially and more specifically are not arranged as a continuous process. It is further believed that eye guidance, when present in some of these inventions, is for the purpose of aligning the eye(s) of the person to be authenticated towards the imaging device, and thus does not constitute an additional security measure.

United States patent application US2006/0147095(A1) by David B. Usher, Gregory L. Heacock, John Marshall and David F. Müller [43] is believed to disclose a method and system comprising capturing multiple frames of an image of the interior of the eye, determining whether the captured image frames are sufficient to provide data for identifying an individual or animal before attempting to generate the identification data, and if not, automatically capturing another set of multiple image frames of the interior of the eye. It is thus believed that the above invention disclosed by Usher et al. is best characterized as a method and system comprising a first continuous image capturing step where a set of multiple image frames is repeatedly captured until a set of multiple image frames of adequate quality is acquired, a second synthesis step, in which a composite enhancement bit map of the captured image frames is formed, and a third feature extraction step wherein a vessel pattern in the retina with respect to the optic disk is detected from the composite bit map, which is then used to generate identification data, and it is further believed that the above steps are arranged sequentially and more specifically are not arranged as a continuous process. It is also believed that the above invention by Usher et al. does not comprise any form of eye guidance mechanism.
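The repeated-capture behaviour attributed to Usher et al. above can be sketched as a simple loop (a hedged illustration only; the quality metric, frame representation and function names are invented for this sketch and do not appear in the disclosure):

```python
def quality(frames):
    # Hypothetical quality metric: fraction of frames that are usable.
    return sum(1 for f in frames if f["sharp"]) / len(frames)

def capture_set(source):
    # Stand-in for capturing one set of multiple image frames.
    return next(source)

def acquire_adequate_frames(source, threshold=0.8, max_tries=10):
    """Keep capturing sets of frames until one set is of adequate
    quality, then return it for synthesis and feature extraction."""
    for _ in range(max_tries):
        frames = capture_set(source)
        if quality(frames) >= threshold:
            return frames
    raise RuntimeError("no adequate frame set acquired")

# Simulated camera: the first set is mostly blurred, the second sharp.
sets = iter([
    [{"sharp": False}, {"sharp": False}, {"sharp": True}],
    [{"sharp": True}, {"sharp": True}, {"sharp": True}],
])
good = acquire_adequate_frames(sets)
assert quality(good) >= 0.8
```

Note that even in this sketch the capture loop terminates before synthesis begins, which is why the overall procedure reads as sequential rather than continuous.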

In the following are described and scrutinized known methods and systems believed to comprise both iris and retinal recognition.

United States patent application US2006/0088193(A1) by David F. Müller, Gregory L. Heacock and David B. Usher [44] is believed to disclose a method and system generating correlated biometric information by capturing a first image of a retinal vessel pattern and within a short time period capturing a second image of an iris minutia pattern, generating retinal and iris biometric data from the images, and then combining or linking the data. It is thus believed that the above invention disclosed by Müller et al. is best characterized as a method and system comprising a first retinal image capturing step, a second iris image capturing step, a third feature extraction step wherein biometric data is extracted from the images, and a fourth synthesis step wherein the retinal biometric data and the iris biometric data are combined or linked to maintain the correlation between the two biometrics, and it is further believed that the above steps are arranged sequentially and more specifically are not arranged as a continuous process. It is also believed that the above invention by Müller et al. does not comprise any form of eye guidance mechanism.

United States patent application US2005/0117782(A1) by Takuya Imaoka, Jyoji Wada and Toshiaki Sasaki [45] discloses an "eye image pick-up system that has an eye position guiding device for guiding an position of an eye to be shot". It is thus believed that the invention by Imaoka et al. does not define which portion of the eye is captured, and it is possible that an image of the iris and/or retina could be captured according to the invention by Imaoka et al. The invention by Imaoka et al. is believed to further comprise an eye guidance means arranged concentrically round the optical axis, which assists the person in guiding the position of his or her eye onto the optical axis, and it is further believed that the above steps are arranged sequentially and more specifically are not arranged as a continuous process. It is further believed that the eye guidance means according to the invention by Imaoka et al. is for the purpose of aligning the eye of the person towards the imaging device, and thus does not constitute an additional security measure.

Japanese patent application JP2006-350410(A) by Yoshito Aoki [46] is believed to disclose an eye image photographing device and authentication device equipped with a guide mechanism, comprising a mirror showing the person to be authenticated a reflected image of his or her eyes, allowing the person to confirm the positions of his or her eyes and adjust his or her position accordingly when necessary. It is believed that the invention by Aoki is best characterized as a system wherein the person is assisted in determining the correct position for his or her eyes by means of a reflected image from a mirror, prior to capturing an image of his or her eyes, and it is thus further believed that no indication of directions to the person is present. Further eye recognition methods and systems comprising a similar guidance mirror means are also known, but are omitted from this disclosure.

Japanese patent application JP2005-304809(A) by Masaru Ikoma and Tomoyoshi Nakaigawa [47] is believed to disclose "an eye image capturing apparatus with illuminating devices" comprising "a capturing unit which captures an eye image", "a plurality of illuminating devices which illuminate the eye" and "a control unit which controls sequentially illuminations of the illuminating devices". It is believed that Ikoma and Nakaigawa do not define a definite sequence or algorithm for illumination of the illuminating devices. Fig. 4 together with its description is believed to disclose a first embodiment of the invention comprising a first sequential image capturing step wherein each illuminating device is illuminated sequentially and a second authentication step, and Fig. 7 together with its description a second embodiment of the invention comprising a similar first image capturing step, a second synthesis step in which partially eclipsed images are combined to form a single iris image, and a third authentication step. Because all above embodiments disclosed by Ikoma and Nakaigawa are believed to be structured sequentially and it is believed that no mention of any kind was made regarding a non-sequential process, such as a continuous process, it is believed that the invention disclosed by Ikoma and Nakaigawa is best characterized as a sequential identification procedure, and more specifically not as a continuous process.

It is further believed that the above invention by Ikoma and Nakaigawa does not comprise any form of eye guidance mechanism.

In the following are described and scrutinized known methods and systems believed to comprise eye movement recognition.

In a 2004 article [48], Pawel Kasprowski and Jozef Ober described an experiment using eye motion analysis as the basis for identifying a person. It is believed that each person in the experiment was shown a moving dot for a period of time, the eye motions of the person were recorded, and identification of each person was attempted based upon his or her eye motion characteristics.

PCT international application WO01/88857(A1) by Eric Lauper and Adriano Huber [49] is believed to disclose a biometric method for identification and authorization of a person based upon analysis of involuntary eye movements when the person views an image.

It is believed that although, when identifying a person according to the above inventions, images of the eye(s) of the person may or may not be continuously captured for line-of-sight determination purposes, identification of the person is nevertheless based upon the movement characteristics of the person's eye, and more specifically not upon the structure of the patterns on the exterior and/or interior of the eye.

In the following is described and scrutinized a known method and system believed to further comprise combined conjunctiva and iris recognition.

The invention disclosed in PCT international application WO2006/119425(A2) by Reza Derakhshani and Arun Ross [50] relates to "methods and apparatus for recognition of the physical characteristics of veins of the sclera of the human eye that are visible through the conjunctival membrane". Derakhshani and Ross state: "In yet another aspect, successful spoof attacks can be minimized through verification of transition times between different conjunctival poses." and "In yet another aspect of the invention, by introducing extra capture areas of the conjunctiva of the eyeball ... the threat of spoof attacks can be reduced. In such systems the attacker will have to reproduce ... different registered regions of the conjunctiva in the sequence that the identification system requires. Such sequencing could be established on a random basis." It is thus believed that the invention by Derakhshani and Ross relates to the person to be identified consecutively turning his or her eyes in a sequence required by the identification system, which could be a random sequence.

Derakhshani and Ross later state: "Fig. 3 is a flow diagram of a method 100 which could be implemented ... in accordance with the invention.", "Fig. 4A and 4B taken together illustrate a method 140 which includes additional details of vascular scanning and processing in accordance with the invention." and "Fig. 5 provides additional details relative to the image or data acquisition step." Because all above embodiments disclosed by Derakhshani and Ross are believed to be structured sequentially and it is believed that no mention of any kind was made regarding a non-sequential process, such as a continuous process, it is believed that the invention disclosed by Derakhshani and Ross is best characterized as a sequential identification procedure, and more specifically not as a continuous process.

Derakhshani and Ross further state in claim 12: "A method ... which includes acquiring selected eye movement sequences of the individual's eyeball.", in claim 13: "A method ... which includes evaluating at least some of the sequences for the presence of...", in claim 17: "A method ... where carrying out includes for enrollment, storing at least some of the sequences with individual identifying indicia." and in claim 21: "A method ... which includes comparing at least some of the stored sequences with acquired eye movement sequences of the individual's eyeball." It is thus believed that according to the invention by Derakhshani and Ross, eye movement sequences of a person are acquired and stored in an enrollment phase, which are compared against eye movement sequences acquired from the person when the person is later to be identified. It is further believed that because the eye movements are first stored as ordered sequences in the enrollment phase and compared against eye movement sequences acquired in the matching phase, the order of the eye movements to be required from the person by the identification system in the matching phase has already been fixed at the time of enrollment. It is believed that this fixed ordering within eye movement sequences implied in claims 12, 13, 17 and 21 hence supersedes the conditional statement "Such sequencing could be established on a random basis." stated within the description of their disclosure. It is thus believed that the directions to which the person is requested to turn his or her eye(s) during the identification procedure are not unknown prior to the identification procedure according to the invention by Derakhshani and Ross.

It is also believed that "...comparing at least some of the stored sequences with acquired eye movement sequences of the individual's eyeball." as stated in claim 21 further implies that acquired eye movement sequences exist at the time of comparison with stored sequences. It is thus further believed that the acquiring of image sequences and comparison of these with stored sequences takes place sequentially according to the invention by Derakhshani and Ross.

Further considering Fig. 4A that partially illustrates "a method which includes additional details of vascular scanning and processing in accordance with the invention" and the statement by Derakhshani and Ross "Where a multi-capture mode has been selected, the images are captured from a variety of poses, step 144.", step 144 shown in Fig. 4A comprises "Capture eye images from straight, left, right and roll up poses". This is the only semantics for "sequences of eye movements" found in the disclosure. Furthermore, Derakhshani and Ross do not disclose any mention of a method or system for guiding the person's eye towards accurate rotational bearings. It is thus believed that "sequences of eye movements" according to the invention by Derakhshani and Ross is best characterized as relating to "sequences of the eye(s) turning towards approximate directions" rather than towards specific rotational bearings.

The believed relation between prior art and the features in the independent claims of the present invention is shown in further detail in Fig. 4.

Disclosure of Certain Vulnerabilities: Prior Art vs. Certain Embodiments of the Present Invention

In the following is a detailed disclosure of certain potential vulnerabilities of eye recognition methods and systems, in which certain fundamental differences between certain preferred embodiments of the present invention and prior art are shown.

Fig. 5 illustrates a generic sequence of events that are believed to take place during the identification procedure of contemporary retinal and/or iris recognition systems, when an imposter attempts to achieve false identification by exploiting an image or images of the iris(es) and/or retina(s) of another person obtained prior to the identification procedure. In step 100, the imposter is believed to position an image or images of the iris(es) and/or retina(s) of another person in front of the image capturing apparatus, or place onto his or her eye(s) a contact lens or lenses with a printed image or images of the iris(es) and/or retina(s) of another person. In step 101, the image capturing apparatus is believed to illuminate the false image(s) of the iris(es) and/or retina(s). In step 102, the image capturing apparatus is believed to capture one or more images of the false image(s) of the iris(es) and/or retina(s). In step 103, the image capturing apparatus is believed to transmit the captured image data to the authentication apparatus, which in steps 200-203 proceeds according to the normal sequence of events believed to take place in contemporary iris and/or retinal recognition systems as elaborated upon in the descriptions of Fig. 1 and Fig. 2 above. Consequently, it is believed that the imposter possesses an opportunity here to achieve false identification using the aforementioned method.

Fig. 6 illustrates a generic sequence of events that are believed to take place during the identification procedure of contemporary retinal and/or iris recognition systems, when an illicit third party is able to intercept data transmitted between the image capturing apparatus and the authentication apparatus after the image capturing phase in steps 100-103. When the image data from the eye(s) of the person subject to identification is transmitted over an insecure communication link, it is believed that the data can be intercepted in additional step 150 undetected by the person and/or the authenticating party, and the identification procedure is believed to proceed as normal in steps 200-203.

As shown in Fig. 7, it is believed that an imposter can later use this data to initiate a false identification procedure in additional step 151 by transmitting data intercepted at a prior time to the authentication apparatus, and the identification procedure is believed to proceed as normal in steps 200-203. Consequently, it is believed that the imposter possesses another opportunity here to achieve false identification using the aforementioned method.

Fig. 8 illustrates a generic sequence of events believed to take place during the identification procedure according to certain preferred embodiments of the present invention, when an imposter attempts to achieve false identification by positioning an image or images of the iris(es) and/or retina(s) of another person obtained prior to the identification procedure in front of the image capturing apparatus. In step 100, the imposter positions an image or images of the iris(es) and/or retina(s) of another person in front of the image capturing apparatus. In step 101, the image capturing apparatus indicates to the imposter the direction(s) towards which his or her eye(s) should be turned. In step 102, the image capturing apparatus illuminates the false image(s) of the iris(es) and/or retina(s). In step 103, the image capturing apparatus captures one or more images of the false image(s) of the iris(es) and/or retina(s). In step 104, the image capturing apparatus transmits the captured image data to the authentication apparatus. In step 200, the authentication apparatus analyzes the fragments of information upon receipt from the image capturing apparatus, and detects that the eye(s) subject to analysis fail(s) to turn as instructed. In steps 201-203 the authentication apparatus proceeds according to the normal sequence of events as elaborated upon in the description of Fig. 3 above. In step 204, the authentication apparatus transmits the result of failed identification to relevant parties and terminates the identification procedure. Consequently, it is believed that the imposter lacks the opportunity here to achieve false identification using the aforementioned method.
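The sequence of events in Fig. 8 can be condensed into a sketch of the underlying liveness principle (assumed code; the direction names, observation functions and step count are illustrative, not taken from the disclosure): a static false image cannot turn towards freshly indicated directions, so the procedure terminates with a failed identification.

```python
import random

DIRECTIONS = ["up", "down", "left", "right"]

def observed_gaze_static_image(_direction):
    """A printed iris/retina image cannot turn: it yields the same gaze
    regardless of the direction indicated to the imposter."""
    return "center"

def observed_gaze_live_eye(direction):
    """An authentic, live eye follows the indicated direction."""
    return direction

def continuous_identification(observe, steps=8, rng=random):
    """At every step a fresh direction is indicated and the captured
    information must show the eye turning accordingly; a single failure
    terminates the procedure with a failed identification."""
    for _ in range(steps):
        direction = rng.choice(DIRECTIONS)
        if observe(direction) != direction:
            return False  # eye(s) fail(s) to turn as instructed
    return True

assert continuous_identification(observed_gaze_live_eye)
assert not continuous_identification(observed_gaze_static_image)
```

The check runs inside the capture loop rather than after it, which is the continuous character the present invention claims over the sequential prior art.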

Fig. 9 illustrates a generic sequence of events believed to take place during the identification procedure according to certain preferred embodiments of the present invention, when an imposter attempts to achieve false identification by placing a contact lens or lens(es) onto his or her eye(s) with a printed image or images of the iris(es) and/or retina(s) of another person obtained prior to the identification procedure. In step 100, the imposter places a contact lens or lenses onto his or her eye(s) with a printed image or images of the iris(es) and/or retina(s) of another person. In step 101, the image capturing apparatus indicates to the imposter the direction(s) towards which his or her eye(s) should be turned. In step 102, the image capturing apparatus illuminates the false image(s) of the iris(es) and/or retina(s). In step 103, the image capturing apparatus captures one or more images of the false image(s) of the iris(es) and/or retina(s). In step 104, the image capturing apparatus transmits the captured image data to the authentication apparatus. In step 200, the authentication apparatus analyzes the fragments of information upon receipt from the image capturing apparatus, and detects that (1) the images of the iris(es) taken from an angle fail to correspond to those taken of an authentic three-dimensional iris or irises, (2) blood vessels and patterns on the exterior of the eye(s), apart from the iris(es), fail to correspond to those on the authentic eye(s), and/or (3) images of the retina(s) taken from an angle fail to correspond to those taken of a retina or retinas within an authentic three-dimensional eye or eyes. In steps 201-203 the authentication apparatus proceeds according to the normal sequence of events as elaborated upon in the description of Fig. 3 above. In step 204, the authentication apparatus transmits the result of failed identification to relevant parties and terminates the identification procedure. Consequently, it is believed that the imposter lacks the opportunity here to achieve false identification using the aforementioned method.
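Detection (1) above, that images of an iris taken from an angle fail to correspond to an authentic three-dimensional iris, can be sketched with a deliberately simplified foreshortening model (every element of this sketch is an assumption: corneal refraction and perspective effects are ignored, and the tolerance value is arbitrary):

```python
import math

def expected_axis_ratio(gaze_angle_deg):
    """Simplified model: a circular iris viewed off-axis by an angle
    theta projects to an ellipse whose minor/major axis ratio is
    roughly cos(theta); corneal refraction is ignored here."""
    return math.cos(math.radians(gaze_angle_deg))

def plausibly_three_dimensional(measured_ratio, gaze_angle_deg, tol=0.05):
    """Check whether the measured foreshortening of the iris outline is
    consistent with an authentic eye turned by the indicated angle."""
    return abs(measured_ratio - expected_axis_ratio(gaze_angle_deg)) <= tol

# An authentic eye turned 30 degrees shows a ratio near cos(30) ~ 0.866:
assert plausibly_three_dimensional(0.87, 30)
# A flat printout stays nearly circular (~1.0) even though a 30-degree
# turn was indicated, and is therefore rejected:
assert not plausibly_three_dimensional(1.0, 30)
```

Analogous consistency checks could in principle cover detections (2) and (3), comparing exterior patterns and retinal views against those expected for the instructed rotation.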

Fig. 10 illustrates a generic sequence of events believed to take place during the identification procedure according to certain preferred embodiments of the present invention, when an illicit third party is able to intercept data transmitted between the image capturing apparatus and the authentication apparatus after the image capturing phase in steps 100-103. When the information on the eye(s) of the person subject to identification is transmitted over an insecure communication link, the information is intercepted in additional step 150 undetected by the person and/or the authenticating apparatus, and the identification procedure proceeds according to the normal sequence of events as elaborated upon in the description of Fig. 3 above.

Fig. 11 illustrates a generic sequence of events believed to take place during the identification procedure according to certain preferred embodiments of the present invention, when an imposter attempts to use fragments of information on the person's eye(s) intercepted at a prior time. In step 100, an apparatus controlled by the imposter requests initiation of the identification procedure from the authentication apparatus. In step 101, the imposter's apparatus receives guidance data in response from the authentication apparatus, referring to the direction(s) towards which the imposter should turn his or her eye(s). In step 102, the imposter continuously transmits the fragments of information in his or her possession to the authentication apparatus. In step 200, the authentication apparatus analyzes the fragments of information upon receipt from the imposter's apparatus, and detects that these fail to correspond to those expected from the image capturing apparatus as the authentic and live eye(s) turns towards the given direction(s). In steps 201-203 the authentication apparatus proceeds according to the normal sequence of events as elaborated upon in the description of Fig. 3 above. In step 204, the authentication apparatus transmits the result of failed identification to relevant parties and terminates the identification procedure. Consequently, it is believed that the imposter lacks the opportunity here to achieve false identification using the aforementioned method.
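The failure mode described for Fig. 11 can be sketched as a challenge-response exchange (assumed code; the fragment representation and function names are hypothetical): fragments recorded under an earlier challenge do not match the directions of a fresh, randomly generated challenge.

```python
import random

DIRECTIONS = ["up", "down", "left", "right"]

def fragment_for(direction):
    """Stand-in for the information captured as a live eye turns
    towards a direction; here simply tagged with that direction."""
    return {"gaze": direction}

def authenticate(stream, challenge):
    """The authentication apparatus accepts only fragments consistent
    with the directions it has just indicated."""
    return all(frag["gaze"] == d for frag, d in zip(stream, challenge))

rng = random.Random(0)
old_challenge = [rng.choice(DIRECTIONS) for _ in range(6)]
recorded = [fragment_for(d) for d in old_challenge]  # intercepted earlier

# A fresh random challenge is issued for the new session; a live eye
# following it is accepted, while the old recording is rejected as soon
# as the two direction sequences diverge:
new_challenge = [rng.choice(DIRECTIONS) for _ in range(6)]
live_stream = [fragment_for(d) for d in new_challenge]
assert authenticate(live_stream, new_challenge)
if old_challenge != new_challenge:
    assert not authenticate(recorded, new_challenge)
```

The replayed data is thus defeated not by cryptography but by the unpredictability of the guidance itself.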

Fig. 12 illustrates the generic sequence of events that must take place before, during and after the identification procedure according to certain preferred embodiments of the present invention, in order for an imposter to obtain means of achieving false identification. In step 50, an illicit third party intercepts guidance data transmitted from the authentication apparatus to the image capturing apparatus. In steps 100-103 the image capturing apparatus proceeds according to the normal sequence of events as elaborated upon in the description of Fig. 3 above. In additional step 150, the illicit third party intercepts the fragments of information transmitted from the image capturing apparatus to the authentication apparatus undetected by the person and/or the authenticating party, and the identification procedure proceeds according to the normal sequence of events as elaborated upon in the description of Fig. 3 above. After the identification procedure, in step 300 the illicit third party has obtained fragments of information regarding all areas of the exterior and interior of the eye(s) and in step 301 all guidance data transmitted during the identification process. In step 302, the illicit third party obtains from another source further knowledge on the semantics of the intercepted guidance data, preferably unique to each image capturing apparatus or plurality of image capturing apparatus, and by exploiting in step 303 this knowledge derives the direction(s) indicated to the person during the identification procedure in steps 100-103, as well as the timing of the indication of the direction(s).
In step 304, using the directions and intercepted fragments of information regarding all areas of the eye(s), the illicit third party constructs a complete physical replica of the exterior and interior of the eye(s) that rotate(s) in response to the guidance data transmitted from the authentication server precisely as the authentic and live eye(s), or a simulator that generates a continuous stream of fragments of information in response to the guidance data transmitted from the authentication server in a manner identical to the image capturing apparatus when positioned in front of the authentic and live eye(s) of the person.

Disclosure of the Extent of Information Captured from an Eye: Prior Art vs. Present Invention

In the following is a disclosure of certain issues related to the extent of information captured from the eye(s) of the person subject to identification, wherein certain fundamental differences between prior art and certain preferred embodiments of the present invention are shown.

Fig. 13 is a representative illustration of a contemporary retinal recognition system capturing an image of the retina of the eye 1. The image capturing apparatus 100 is believed to focus through the lens 4 on the interior 2 of the eye 1, capturing the visible portion 110 of the interior 2 of the eye 1, which is projected back to the image capturing apparatus 100 constituting the captured image 120.

When the eye 1 is centered and stationary, it is believed that the image capturing apparatus 100 can only capture a limited portion of the interior 2 of the eye 1.
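As a rough indication of how limited this portion is, the eye can be modelled as a sphere viewed through the pupil from a distance (a deliberately crude geometric sketch; it ignores the refractive power of the cornea and lens, and the dimensions are merely typical illustrative values, not figures from the disclosure):

```python
import math

def visible_retina_half_angle(pupil_radius_mm, eye_radius_mm):
    """Very rough sketch: treating the eye as a sphere viewed through a
    circular pupil from far away, the directly visible patch of retina
    subtends roughly this half-angle at the eye's centre."""
    return math.degrees(math.asin(pupil_radius_mm / eye_radius_mm))

# With a ~4 mm (dilated) pupil radius and a ~12 mm eye radius, only a
# narrow cone of the interior is visible while the eye is stationary:
half_angle = visible_retina_half_angle(4.0, 12.0)
assert half_angle < 30.0
```

The actual visible field of a fundus camera is wider because of the eye's own optics, but the qualitative point stands: a stationary, centered eye exposes only a fraction of its interior.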

Fig. 14 is a representative illustration of a contemporary iris recognition system capturing an image of the iris 3 of the eye 1. The image capturing apparatus 200 is believed to focus on the iris 3, the visible portion 230 of which is projected back to the image capturing apparatus 200 constituting the captured image 240.

When the eye 1 is centered and stationary, it is believed that the image capturing apparatus 200 can only capture a limited portion of the exterior of the eye, including, wholly or partially, the iris 3, typically as a two-dimensional image.

It is further believed that contemporary iris recognition systems focus on recognition of the iris 3 and thus possible auxiliary image data from other portions of the exterior of the eye 1 are discarded.

Fig. 15 is a representative illustration of the image capturing apparatus 300 capturing an image of the interior 2 of the eye 1, including but not limited to the retina, and the exterior of the eye 1, including but not limited to the iris 3, according to certain preferred embodiments of the present invention. In this example the image capturing apparatus is further equipped with an optical system 301, which enables the image capturing apparatus 300 to view the eye 1 from a plurality of angles. The image capturing apparatus 300 views through the lens 4 the interior 2 of the eye 1, capturing the visible portion 310 of the interior 2 of the eye 1, which is projected back to the image capturing apparatus 300 constituting the captured image 320. In the two-dimensional illustration, the visible portion 310 of the interior 2 of the eye 1 is shown as an arc limited by an upper end point 351 and a lower end point 352. Similarly, the image capturing apparatus 300 views the visible portion 330 of the exterior of the eye 1, which is projected back to the image capturing apparatus 300 constituting the captured image 340.

Fig. 16 is a representative illustration of the image capturing apparatus 300 and eye 1 as shown in Fig. 15, when the eye 1 has turned counterclockwise from the perspective of the two-dimensional illustration. The visible portion 311 of the interior 2 of the eye 1 is projected back to the image capturing apparatus 300 constituting the captured image 321, the upper end point 353 of which is situated beyond the upper end point 351 of the visible portion 310 of the interior 2 of the eye 1 in Fig. 15 when the eye 1 was leveled horizontally. Similarly, the image capturing apparatus 300 views the visible portion 331 of the exterior of the eye 1, which is projected back to the image capturing apparatus 300 constituting the captured image 341. The lower end point of visible portion 331 of the exterior of the eye 1 is situated beyond the lower end point of the exterior of the eye 1 in Fig. 15 when the eye 1 was leveled horizontally, and the iris 3 is viewed from a sharper angle.

Fig. 17 is a representative illustration of the image capturing apparatus 300 and eye 1 as shown in Fig. 15, when the eye 1 has turned clockwise from the perspective of the two-dimensional illustration. The visible portion 312 of the interior 2 of the eye 1 is projected back to the image capturing apparatus 300 constituting the captured image 322, the lower end point 356 of which is situated beyond the lower end point 352 of the visible portion 310 of the interior 2 of the eye 1 in Fig. 15 when the eye 1 was leveled horizontally. Similarly, the image capturing apparatus 300 views the visible portion 332 of the exterior of the eye 1, which is projected back to the image capturing apparatus 300 constituting the captured image 342. The upper end point of visible portion 332 of the exterior of the eye 1 is situated beyond the upper end point of the exterior of the eye 1 in Fig. 15 when the eye 1 was leveled horizontally, and the iris 3 is again viewed from a sharper angle.

It is thus believed that by means of instructing the person subject to identification to turn his or her eye as indicated by the image capturing apparatus, information from broader areas of both the interior and exterior of the eye can be made available to the authentication apparatus. In addition, it is believed that certain areas of the eye, including but not limited to the iris, can be viewed from a plurality of angles, enabling the capturing of three-dimensional information on these areas.

The dimensions and/or geometries in illustrations Fig. 15 through Fig. 17 are indicative and exemplary only, and the actual areas of the interior and/or exterior of the eye will vary according to the dimensions and/or geometries of illustrated components, and in particular, variation of the dimensions and/or geometries can contribute to broader portions, in comparison with those shown in illustrations Fig. 15 through Fig. 17, of the interior and/or exterior of the eye becoming visible to the image capturing apparatus.
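The geometric effect described for Figs. 15 through 17 can be sketched numerically. The following simplified 2-D model is an illustration only (the function names and the 30-degree aperture half-angle are assumptions, not taken from the disclosure): the retinal region visible through the pupil is treated as an arc opposite the gaze direction, so turning the eye shifts the arc's end points, and the union of the arcs over several gaze directions covers a broader portion of the interior than any single stationary view.

```python
def visible_retina_arc(gaze_deg, half_width_deg=30.0):
    """End points (degrees) of the retinal arc visible through the pupil
    for a given gaze rotation, in a simplified 2-D circular eye model.
    The visible region is centered diametrically opposite the pupil."""
    center = 180.0 + gaze_deg
    return (center - half_width_deg, center + half_width_deg)

def covered_arc(gaze_angles_deg, half_width_deg=30.0):
    """Total angular coverage (degrees) of the union of visible arcs over a
    sequence of gaze directions (assumes consecutive arcs overlap)."""
    lows, highs = zip(*(visible_retina_arc(g, half_width_deg)
                        for g in gaze_angles_deg))
    return max(highs) - min(lows)
```

With a single centered gaze the model yields 60 degrees of coverage; adding gazes of -20 and +20 degrees extends the covered arc to 100 degrees, mirroring how end points 353 and 356 lie beyond 351 and 352 in the figures.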

No preference is made regarding the perspectives of the drawings, i.e. from which directions the components portrayed in the drawings are viewed. The drawings are for the sole purpose of demonstrating certain aspects of certain embodiments of the present invention regardless of whether interpreted as representing a top view, a bottom view, a side view, or any other possible view of the portrayed components.

The optical system 301 is included in the illustrations for the exclusive purpose of demonstrating the function of transforming the direction(s) and shape(s) of images and/or light during the identification procedure. No conclusions of any kind should be drawn from these illustrations regarding the contents, structure(s), material(s), shape(s), dimensions and/or geometries of the optical system 301 when implemented as part of the disclosed invention.

BRIEF DESCRIPTION OF DRAWINGS

Illustrations Fig. 1 through Fig. 17 are related to certain aspects of prior art and certain preferred embodiments of the present invention, as follows:

Fig. 1 is a representative illustration of a sequence of events that are believed to take place during the identification procedure of contemporary retinal recognition systems.

Fig. 2 is a representative illustration of a sequence of events that are believed to take place during the identification procedure of contemporary iris recognition systems.

Fig. 3 is a representative illustration of a sequence of events that take place during the identification procedure according to certain preferred embodiments of the present invention.

Fig. 4 is a table showing the believed relation between prior art and the features in the independent claims of the present invention.

Fig. 5 is a representative illustration of a sequence of events that are believed to take place during the identification procedure of contemporary retinal and/or iris recognition systems, when an imposter attempts to achieve false identification by exploiting an image of the iris and/or retina of another person obtained prior to the identification procedure.

Fig. 6 is a representative illustration of a sequence of events that are believed to take place during the identification procedure of contemporary retinal and/or iris recognition systems, when an illicit third party is able to intercept data transmitted between the image capturing apparatus and the authentication apparatus after the image capturing phase.

Fig. 7 is a representative illustration of a sequence of events that are believed to take place during the identification procedure of contemporary retinal and/or iris recognition systems, when an imposter attempts to initiate a false identification procedure using image data from a person's eye(s) intercepted at a prior time.

Fig. 8 is a representative illustration of a sequence of events that take place during the identification procedure according to certain preferred embodiments of the present invention, when an imposter attempts to achieve false identification by positioning a printed image or images of the iris(es) and/or retina(s) of another person obtained prior to the identification procedure in front of the image capturing apparatus.

Fig. 9 is a representative illustration of a sequence of events that take place during the identification procedure according to certain preferred embodiments of the present invention, when an imposter attempts to achieve false identification by placing a contact lens or lenses onto his or her eye(s) with a printed image or images of the iris(es) and/or retina(s) of another person obtained prior to the identification procedure.

Fig. 10 is a representative illustration of a sequence of events that takes place during the identification procedure according to certain preferred embodiments of the present invention, when an illicit third party is able to intercept data transmitted between the image capturing apparatus and the authentication apparatus during the image capturing phase.

Fig. 11 is a representative illustration of a sequence of events that take place during the identification procedure according to certain preferred embodiments of the present invention, when an imposter attempts to use fragments of information on the person's eye(s) intercepted at a prior time.

Fig. 12 is a representative illustration of the sequence of events that should take place before, during and after the identification procedure according to certain preferred embodiments of the present invention, in order for an imposter to obtain means of achieving false identification.

Fig. 13 is a representative illustration of a contemporary retinal recognition system capturing an image of the retina of the eye.

Fig. 14 is a representative illustration of a contemporary iris recognition system capturing an image of the iris of the eye.

Fig. 15 is a representative illustration of the image capturing apparatus capturing an image of the interior and the exterior of the eye, according to certain preferred embodiments of the present invention.

Fig. 16 is a representative illustration of the image capturing apparatus and eye as shown in Fig. 15, when the eye has turned counterclockwise from the perspective of the illustration.

Fig. 17 is a representative illustration of the image capturing apparatus and eye as shown in Fig. 15, when the eye has turned clockwise from the perspective of the illustration.

Illustrations Fig. 18 through Fig. 28 are related to certain preferred embodiments of the present invention, as follows:

Fig. 18 is a representative illustration of a preferred sequence of events that take place during the identification procedure according to one preferred embodiment of the present invention wherein the image data captured from the eye(s) constitutes the information transmitted from the image capturing apparatus to the authentication apparatus.

Fig. 19 is a representative illustration of a preferred sequence of events that take place during the identification procedure according to one preferred embodiment of the present invention wherein the characteristics extracted from the image data captured from the eye(s) constitute the information transmitted from the image capturing apparatus to the authentication apparatus.

Fig. 20 is a representative illustration of a preferred sequence of events that take place during the identification procedure according to one preferred embodiment of the present invention including encryption and decryption steps wherein the image data captured from the eye(s) constitutes the information transmitted from the image capturing apparatus to the authentication apparatus.

Fig. 21 is a representative illustration of a preferred sequence of events that take place during the identification procedure according to one preferred embodiment of the present invention including encryption and decryption steps wherein the characteristics extracted from the image data captured from the eye(s) constitute the information transmitted from the image capturing apparatus to the authentication apparatus.

Fig. 22 is a representative front view illustration of one preferred exemplary embodiment of an image capturing apparatus according to certain preferred embodiments of the present invention, in its physically "closed" state.

Fig. 23 is a representative rear view illustration of one preferred exemplary embodiment of an image capturing apparatus according to certain preferred embodiments of the present invention, in its physically "closed" state.

Fig. 24 is a representative illustration of one preferred exemplary embodiment of an image capturing apparatus according to certain preferred embodiments of the present invention, in its physically "semi-open" state.

Fig. 25 is a representative front view illustration of one preferred exemplary embodiment of an image capturing apparatus according to certain preferred embodiments of the present invention, in its physically "fully open" state.

Fig. 26 is a representative rear view illustration of one preferred exemplary embodiment of an image capturing apparatus according to certain preferred embodiments of the present invention, in its physically "fully open" state.

Fig. 27 is a representative illustration of the preferred components of one preferred exemplary embodiment of the upper part of the image capturing apparatus according to certain preferred embodiments of the present invention.

Fig. 28 is a representative illustration of the preferred components of the lower part, as well as a dual-axis hinge preferably connecting the upper and lower part, of one preferred exemplary embodiment of the image capturing apparatus according to certain preferred embodiments of the present invention.

Illustrations Fig. 29 through Fig. 39 are related to certain preferred embodiments of the present invention in certain environments and situations, and certain security measures therein, as follows:

Fig. 29 is a representative illustration of a situation in which a person is identified and authenticated according to the present invention via a mobile handset that is connected to a cellular network.

Fig. 30 is a representative illustration of a situation in which a person is identified and authenticated according to the present invention via a desktop computer that is connected via a fixed-wired connection to a data network.

Fig. 31 is a representative illustration of a situation in which a person is identified and authenticated according to the present invention via a portable computer that is connected via a wireless connection to a data network.

Fig. 32 is a representative illustration of a situation in which a person is identified and authenticated according to the present invention at an unsupervised point-of-presence via a wireless data connection.

Fig. 33 is a representative illustration of a situation in which a person is identified and authenticated according to the present invention at a supervised point-of-presence via a wireless data connection.

Fig. 34 is a representative illustration of a situation in which a person is identified and authenticated according to the present invention in an isolated environment that lacks external data network and/or cellular network connectivity.

Fig. 35 is a representative illustration of the method for providing secure data transfer when a person is identified and authenticated via a mobile handset that is connected to a cellular network.

Fig. 36 is a representative illustration of the method for providing secure data transfer when a person is identified and authenticated via a desktop computer that is connected to a data network via a fixed-wired data connection.

Fig. 37 is a representative illustration of the method for providing secure data transfer when a person is identified and authenticated via a portable computer that is connected to a wireless data network.

Fig. 38 is a representative illustration of the method for providing secure data transfer when a person is identified and authenticated via a wireless data network at either an unsupervised or a supervised point-of-presence.

Fig. 39 is a representative illustration of the method for providing secure data transfer when a person is identified and authenticated by a stand-alone identification and authentication system in an isolated environment that lacks external data network and/or cellular network connectivity.

The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. Moreover, in the figures, like reference numerals designate corresponding parts throughout the different views. However, like parts do not always have like reference numerals. Moreover, all illustrations are intended to convey concepts in which relative sizes, shapes and other detailed attributes may be illustrated schematically rather than literally or precisely.

BEST MODE FOR CARRYING OUT THE INVENTION

Disclosure of Several Preferred Embodiments of the Present Invention

One preferred embodiment of the present invention preferably comprises: (1) an image capturing apparatus, placed in front of the eye(s) of the person subject to identification (hereafter the "person"); (2) an authentication apparatus that is situated either in the vicinity of the person or at a remote location; (3) an identification database that comprises information on the eyes of registered persons; and (4) a security database that comprises digital keys for encryption and decryption purposes, in which the authentication apparatus, the identification database and security database are interconnected via a secure means. A data communication link is preferably used for communication between the image capturing apparatus and the authentication apparatus.

In another preferred embodiment of the present invention, the identification method comprises:

Step 100: the person positioning an image capturing apparatus in front of his or her eye(s) and initiating the identification procedure;
Step 101: the image capturing apparatus preferably notifying the person of initiation of the identification procedure;
Step 102: the image capturing apparatus establishing a communication link with a remote authentication apparatus;
Step 103: the image capturing apparatus transmitting a notification regarding initiation of the identification procedure to the authentication apparatus;
Step 200: the authentication apparatus receiving notification regarding initiation of the identification procedure from the image capturing apparatus;
Step 219: the authentication apparatus determining a new direction or series of new directions to which the person should turn his or her eye(s);
Step 220: the authentication apparatus deriving guidance data from the direction or series of directions;
Step 222: the authentication apparatus transmitting the guidance data to the image capturing apparatus;
Step 110: the image capturing apparatus receiving the guidance data from the authentication apparatus;
Step 112: the image capturing apparatus deriving from the guidance data the next direction or series of next directions to which the person should turn his or her eye(s);
Step 113: the image capturing apparatus indicating to the person the direction to which the person should turn his or her eye(s);
Step 114: the image capturing apparatus illuminating one or more portions of the eye(s);
Step 115: while the eye(s) turn(s) towards the indicated direction, the image capturing apparatus capturing subsequent images of portions of the eye(s);
Step 118: the image capturing apparatus transmitting information captured from the eye(s) to the authentication apparatus;
Step 119: the image capturing apparatus analyzing whether the eye has or eyes have turned towards or close to the indicated direction; if not, the image capturing apparatus retracing from step 115;
Step 120: the image capturing apparatus verifying whether all directions given by the authentication apparatus have been processed; if not, the image capturing apparatus retracing from step 113;
Step 121: in the case that all directions given by the authentication apparatus have been processed, the image capturing apparatus awaiting new guidance data or notification of termination of the identification procedure from the authentication apparatus;
Step 210: the authentication apparatus receiving the information transmitted by the image capturing apparatus in step 118;
Step 213: the authentication apparatus combining the received information into a composite set of information derived from all previous information on the eye(s) captured since initiation of the identification procedure;
Step 214: the authentication apparatus determining whether the quality and quantity of information within the composite set of information on the eye(s) are adequate for attempting matching within an identification database containing information on eyes of known persons; if not, the authentication apparatus skipping to step 218;
Step 215: the authentication apparatus comparing the composite set of information on the person's eye(s) against information stored in the identification database;
Step 216: the authentication apparatus verifying whether a match is achieved at an adequate level of confidence; if not, the authentication apparatus skipping to step 218;
Step 217: the authentication apparatus transmitting confirmation of successful authentication to the image capturing apparatus and possible third parties, and terminating the identification procedure at the authentication apparatus;
Step 218: the authentication apparatus determining whether a new direction or series of new directions to which the person should turn his or her eye(s) is required; if not, the authentication apparatus skipping to step 223; otherwise the authentication apparatus proceeding to step 219 as described above;
Step 190: in the case that the person indicates abortion of the identification procedure to the image capturing apparatus, or the image capturing apparatus decides to abort the identification procedure due to conditions such as but not limited to time constraints, the communication link being disrupted, and/or the authentication apparatus failing to respond as expected, the image capturing apparatus proceeding from step 192; otherwise no additional actions being taken by the image capturing apparatus beyond and due to this step;
Step 191: in the case that notification of termination of the identification procedure, preferably together with the result of the identification procedure, is received from the authentication apparatus by the image capturing apparatus, the image capturing apparatus proceeding from step 192; otherwise no additional actions being taken by the image capturing apparatus beyond and due to this step;
Step 192: the image capturing apparatus preferably displaying the result of the identification procedure to the person, if available;
Step 193: the image capturing apparatus disconnecting the communication link between the image capturing apparatus and the authentication apparatus;
Step 194: the image capturing apparatus terminating the identification procedure at the image capturing apparatus;
Step 290: in the case that the authentication apparatus decides to abort the identification procedure due to conditions such as but not limited to time constraints, the communication link being disrupted, and/or the image capturing apparatus failing to respond as expected, the authentication apparatus proceeding from step 292; otherwise no additional actions being taken by the authentication apparatus beyond and due to this step;
Step 291: in the case that notification of termination of the identification procedure is received from the image capturing apparatus by the authentication apparatus, the authentication apparatus proceeding from step 292; otherwise no additional actions being taken by the authentication apparatus beyond and due to this step; and
Step 292: the authentication apparatus transmitting confirmation of unsuccessful identification to possible third parties and, if possible, to the image capturing apparatus, and then terminating the identification procedure at the authentication apparatus.
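The core capture-and-match loop of the step sequence above can be condensed into an executable sketch. The Python model below is illustrative only: it replaces captured images with labeled pattern fragments, omits the illumination, encryption, guidance-data encoding, and abort branches, and all class, method, and variable names are assumptions rather than part of the disclosure.

```python
import random

class AuthenticationApparatus:
    """Condensed sketch of steps 200-292: issue directions, accumulate
    captured information, and attempt a match once coverage is adequate."""
    def __init__(self, identification_db, required_fragments=4):
        self.identification_db = identification_db   # person -> set of patterns
        self.required = required_fragments
        self.composite = set()                       # composite set, step 213

    def next_direction(self):                        # steps 219-220 (varying, unknown beforehand)
        return random.choice(["up", "down", "left", "right"])

    def receive(self, fragment):                     # steps 210 and 213
        self.composite.add(fragment)

    def try_match(self):                             # steps 214-216
        if len(self.composite) < self.required:
            return None                              # inadequate information, step 218
        for person, patterns in self.identification_db.items():
            if self.composite <= patterns:
                return person                        # successful match, step 217
        return None

class ImageCapturingApparatus:
    """Condensed sketch of steps 113-115: indicate a direction and capture
    the pattern visible while the eye turns toward it."""
    def __init__(self, eye_patterns):
        self.eye_patterns = eye_patterns             # direction -> pattern fragment

    def capture(self, direction):
        return self.eye_patterns[direction]

def identification_procedure(capture_apparatus, auth_apparatus, max_rounds=20):
    """The overall loop in miniature: repeat until a match or the round limit."""
    for _ in range(max_rounds):
        direction = auth_apparatus.next_direction()
        fragment = capture_apparatus.capture(direction)
        auth_apparatus.receive(fragment)
        match = auth_apparatus.try_match()
        if match is not None:
            return match
    return None                                      # unsuccessful, step 292
```

The continuous nature of the method shows up in the loop structure: identification is not a single image comparison but a negotiation in which the authentication apparatus keeps requesting new, previously unknown directions until the composite information suffices.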

In yet another preferred embodiment of the present invention, preferably either (1) the image data captured from the eye(s) constitutes the information to be transmitted to the authentication apparatus in step 118, in which case upon receipt of the information transmitted by the image capturing apparatus in step 210, the authentication apparatus preferably extracts characteristics from the captured image in additional step 212 executed after step 210 and prior to step 213; or (2) characteristics extracted from the image data captured from the eye(s) constitute the information to be transmitted to the authentication apparatus in step 118, in which case the image capturing apparatus preferably extracts characteristics from the captured image in additional step 116 executed after step 115 and prior to step 118.

The sequence of events that preferably takes place during the identification procedure according to the aforementioned preferred embodiment of the present invention is illustrated in Fig. 18 when the image data captured from the eye(s) constitutes the information transmitted from the image capturing apparatus to the authentication apparatus, and in Fig. 19 when characteristics extracted from the image data captured from the eye(s) constitute the information to be transmitted from the image capturing apparatus to the authentication apparatus.

In yet another preferred embodiment of the present invention, communication between the image capturing apparatus and the authentication apparatus is preferably partially or wholly channeled via (1) one or more cellular networks, in which case communication between the image capturing apparatus and the authentication apparatus is relayed via the person's mobile communication terminal (e.g. mobile handset) that communicates with the image capturing apparatus either wirelessly (e.g. Bluetooth® and/or UWB link), via wire (e.g. USB or FireWire cable), or via directly docking to an appropriate outlet of the mobile communication terminal (e.g. USB or FireWire port); and/or (2) one or more computer networks (e.g. the Internet), in which case communication between the image capturing apparatus and the authentication apparatus is preferably relayed via a network-enabled data terminal (e.g. desktop computer or portable computer) that communicates with the image capturing apparatus either wirelessly (e.g. Bluetooth® and/or UWB link), via wire (e.g. USB or FireWire cable), or via directly docking to an appropriate outlet of the data terminal (e.g. USB or FireWire port).

According to yet another preferred embodiment of the present invention, in the case of the person residing in the vicinity of a point-of-presence at which authentication is required, communication between the image capturing apparatus and the authentication apparatus is preferably partially or wholly channeled via a local access point, preferably a wireless network, situated at the point-of-presence, in which case communication between the image capturing apparatus and the authentication apparatus is preferably relayed via the access point that communicates with the image capturing apparatus either wirelessly (e.g. Bluetooth®, UWB or WLAN), via wire (e.g. USB or FireWire cable), or via directly docking to an appropriate outlet at the point-of-presence.

In yet another preferred embodiment of the present invention, the image capturing apparatus may preferably be further equipped with an optical system, comprising one or more optical elements, preferably allowing (1) the illuminating portion of the image capturing apparatus to illuminate portions of the eye(s) simultaneously from a plurality of angles, preferably as if the sources of illumination were situated on a concave double-curved surface or surfaces in front of the eye(s); and (2) the image capturing sensor of the image capturing apparatus to view portions of the eye(s) simultaneously from a plurality of angles, preferably from the perspective of a concave double-curved surface or surfaces in front of the eye(s), and consequently forming three-dimensional images of visible portions of the eye(s).
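The claim that viewing portions of the eye simultaneously from a plurality of angles enables three-dimensional imaging rests on triangulation: two known viewing rays that observe the same feature intersect at that feature's position in space. The sketch below is a minimal 2-D illustration of that principle only; the function name and interface are assumptions, and a real optical system would solve this in three dimensions with calibration.

```python
def intersect_rays(o1, d1, o2, d2):
    """Return the intersection point of two non-parallel 2-D viewing rays,
    each given as origin + t * direction. Two such rays observing the same
    feature from different angles locate it, which is the essence of
    recovering three-dimensional structure from multiple views."""
    # Solve o1 + t1*d1 = o2 + t2*d2 for t1 via the 2x2 linear system.
    det = -d1[0] * d2[1] + d2[0] * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("rays are parallel; no unique intersection")
    bx, by = o2[0] - o1[0], o2[1] - o1[1]
    t1 = (-bx * d2[1] + d2[0] * by) / det
    return (o1[0] + t1 * d1[0], o1[1] + t1 * d1[1])
```

For example, rays from origins (0, 0) and (2, 0) with directions (1, 1) and (-1, 1) meet at (1, 1); with a single viewing angle, by contrast, depth along the ray is unrecoverable.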

In yet another preferred embodiment of the present invention, the guidance data to be transmitted in step 222 by the authentication apparatus to the image capturing apparatus may preferably first be encrypted by the authentication apparatus in additional step 221 executed after step 220 and prior to step 222, and decrypted by the image capturing apparatus in additional step 111 executed after step 110 and prior to step 112.

In yet another preferred embodiment of the present invention, the information on the person's eye(s) to be transmitted in step 118 by the image capturing apparatus to the authentication apparatus may also preferably first be encrypted by the image capturing apparatus in additional step 117 executed after step 115, and after step 116 when appropriate, and prior to step 118, and decrypted by the authentication apparatus in additional step 211 executed after step 210 and prior to step 213, and prior to step 212 when appropriate.

In yet another preferred embodiment of the present invention, the aforementioned encryption and decryption algorithm(s) may preferably make use of one or more of the following: (1) a personal digital key unique to the person; (2) a device-specific digital key unique to the image capturing apparatus and/or to a detachable identification component that is inserted into the image capturing apparatus; (3) a link-specific digital key unique to the person's communication apparatus and/or to a detachable identification component inserted into the person's communication apparatus, such as but not limited to a mobile phone subscription number, an International Mobile Equipment Identity (IMEI) number, an Internet Protocol (IP) address, or a Media Access Control (MAC) address; (4) a unique location-specific digital key, such as but not limited to (a) the digital key of the point-of-presence at which the person resides, (b) the logical address of the base station apparatus via which the person's mobile handset is accessing the cellular network, (c) the logical address of the base station apparatus via which the person's wireless networking enabled computer is accessing the data network, or (d) the logical address of the network router apparatus via which the person's computer is accessing the data network via a fixed-wired data connection; (5) a unique operator-specific digital key, such as but not limited to (a) the digital key of the organization controlling the point-of-presence, (b) the digital key of the cellular network operator organization via which the person's mobile handset is accessing the cellular network, or (c) the digital key of the data network operator organization via which the person's computer is accessing the data network; and (6) a time-code that varies over time.
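One conceivable way to combine several of the listed key sources into a single symmetric session key is an HMAC-based derivation, sketched below with the Python standard library. This is an assumption for illustration only: the disclosure does not specify a derivation scheme, and the factor encoding, salt value, and choice of SHA-256 are all hypothetical.

```python
import hashlib
import hmac

def derive_session_key(factors, salt=b"eye-id-session"):
    """Combine whichever key factors are available (personal key, device key,
    link key, location key, operator key, time-code, ...) into one 32-byte
    symmetric key using an HKDF-style extract step. Sorting makes the result
    independent of the order in which factors were collected."""
    material = b"\x00".join(sorted(factors))
    return hmac.new(salt, material, hashlib.sha256).digest()
```

Because the time-code factor varies over time, the derived key changes from one identification procedure to the next, so intercepted ciphertext from an earlier procedure cannot simply be replayed.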

The sequence of events that preferably takes place during the identification procedure according to the aforementioned preferred embodiments of the disclosed invention, including encryption and decryption steps, is illustrated in Fig. 20 when the image data captured from the eye(s) constitutes the information transmitted from the image capturing apparatus to the authentication apparatus, and in Fig. 21 when characteristics extracted from the image data captured from the eye(s) constitute the information transmitted from the image capturing apparatus to the authentication apparatus.

In yet another preferred embodiment of the present invention, the image capturing apparatus may preferably further comprise a mechanism for detecting abnormal conditions during the identification procedure, preferably triggered by the person consistently looking away from the directions indicated to the person, in which case an alarm signal is preferably transmitted to the authentication apparatus, which possibly relays notification of the alarm to a third party or third parties.
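The abnormal-condition mechanism can be sketched as a simple counter over gaze deviations. The sketch below is an illustrative assumption only: the disclosure does not specify thresholds, and the miss limit, angular tolerance, and function name are all hypothetical.

```python
def detect_abnormal_gaze(indicated_deg, observed_deg,
                         max_consecutive_misses=3, tolerance_deg=15.0):
    """Return True (trigger an alarm signal to the authentication apparatus)
    when the person consistently looks away from the indicated directions.
    Directions are modeled as angles in degrees; occasional misses are
    tolerated and reset the counter once the gaze follows again."""
    misses = 0
    for target, actual in zip(indicated_deg, observed_deg):
        if abs(target - actual) > tolerance_deg:
            misses += 1
            if misses >= max_consecutive_misses:
                return True
        else:
            misses = 0
    return False
```

In this model a cooperative person who tracks the indicated directions within tolerance never trips the alarm, while a persistently deviating gaze does after the configured number of consecutive misses.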

In yet another preferred embodiment of the present invention, the image capturing apparatus is preferably a small-size lightweight mobile electronic apparatus (hereafter the "mobile apparatus"), which is preferably easily carried by the user, e.g. in the user's pocket and/or as a key ring or the like.

In yet another preferred embodiment of the present invention, the mobile apparatus preferably comprises two main parts, the upper and lower parts.

In yet another preferred embodiment of the present invention, the upper and lower parts of the mobile apparatus are preferably attached to one another by means of a preferably dual-axis hinge, due to which the mobile apparatus can preferably assume three different physical states: (1) closed, (2) semi-open, and (3) fully open.

Fig. 22 illustrates the front view and Fig. 23 the rear view of one preferred embodiment of the mobile apparatus in its physically "closed" state. The upper part 100 and lower part 200 are preferably attached to one another by means of the dual-axis hinge 300 preferably further comprising the upper axis 310 and the lower axis 320. The lower part 200 preferably also comprises the external connector part 235, which in this preferred exemplary embodiment preferably comprises a single USB socket that preferably serves as both an external connector for data transfer and electrical power intake. In other preferred embodiments of the mobile apparatus, the external connector part 235 may preferably comprise one or more dedicated sockets for electrical power intake and/or one or more dedicated sockets for data transfer.

Fig. 24 illustrates one preferred embodiment of the mobile apparatus in its physically "semi-open" state, which is reached from the "closed" state preferably by rotating the upper part 100 around the upper axis 310 of the dual-axis hinge 300 a total of 180 degrees. In this state, the optical system 140 and the eye padding 150 of the upper part 100, as well as the user control part 210 of the lower part 200, are preferably exposed. In this preferred exemplary embodiment, the user control part 210 preferably comprises three pushbuttons that preferably grant the user a degree of control over the mobile apparatus and the identification procedure. In other preferred embodiments of the mobile apparatus, the user control part 210 may preferably comprise one or more pushbuttons; switches; indicators; displays; touch-sensitive controls; motion-sensitive controls; microphones; loudspeakers; and/or any other types of user interface components.

Fig. 25 illustrates the front view and Fig. 26 the rear view of one preferred embodiment of the mobile apparatus in its physically "fully open" state, which is reached from the "semi-open" state preferably by rotating the lower part 200 around the lower axis 320 of the dual-axis hinge 300 a total of 180 degrees. The visible parts of the mobile apparatus in the "fully open" state are preferably identical with those in the "semi-open" state, and thus the user may preferably operate the mobile apparatus in either of these states.
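The three physical states described above, and the hinge rotations that move the apparatus between them, can be summarized as a simple state machine. The sketch below is purely illustrative and forms no part of the disclosed apparatus; the class, method names and Python formulation are hypothetical:

```python
# Illustrative sketch of the three physical states of the mobile apparatus
# and the dual-axis hinge rotations between them (hypothetical names).

class MobileApparatus:
    # Rotating the upper part around the upper axis 310 toggles
    # closed <-> semi-open; rotating the lower part around the lower
    # axis 320 toggles semi-open <-> fully open.
    TRANSITIONS = {
        ("closed", "upper"): "semi-open",
        ("semi-open", "upper"): "closed",
        ("semi-open", "lower"): "fully open",
        ("fully open", "lower"): "semi-open",
    }

    def __init__(self) -> None:
        self.state = "closed"

    def rotate(self, axis: str) -> str:
        """Rotate the given hinge axis a total of 180 degrees, if permitted."""
        try:
            self.state = self.TRANSITIONS[(self.state, axis)]
        except KeyError:
            raise ValueError(f"cannot rotate the {axis} axis in the {self.state!r} state")
        return self.state

    def is_operable(self) -> bool:
        # The user may operate the apparatus in either the "semi-open" or the
        # "fully open" state, where the optical system and controls are exposed.
        return self.state in ("semi-open", "fully open")
```

Under this sketch, reaching the "fully open" state from the "closed" state requires two rotations, first around the upper axis 310 and then around the lower axis 320, mirroring Figs. 22 through 26.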

Fig. 27 is a representative illustration of the preferred components of one preferred embodiment of the upper part 100 of certain preferred embodiments of the mobile apparatus as follows: (1) the upper part casing 110 preferably comprising a portion for connecting to the upper axis 310 of the dual-axis hinge 300 and which preferably is detachable, enabling the user to select the upper part casing 110 of his or her preference from a preferably wide variety of shapes, patterns and colors; (2) the upper part padding 120 preferably included to protect the
contents of the upper part 100 of the mobile apparatus from physical shocks; (3) the image capturing and illumination part 130 comprising but not limited to one or more integrated circuits for capturing images and preferably preprocessing captured images, preferably further comprising a massively parallel array or mesh of light-sensitive data processing units, as well as preferably an array or mesh of sources of illumination that are preferably but not necessarily embedded within the image capturing integrated circuit(s) and that emit a single wavelength or preferably a plurality of wavelengths, in such a manner that different portions of the eye may be illuminated using a plurality of wavelengths that may be varied during the identification procedure depending on the characteristics of different portions of the eye, specific physiological characteristics of the eye, external circumstances, or other reasons due to which the quality of the information can be enhanced, and the sources of illumination may additionally be used to indicate to the user the direction(s) to turn his or her eye during the identification procedure and/or to indicate various information to the user by illuminating various patterns of light, such as but not limited to letters, numbers and other characters; (4) preferably an optical system 140 comprising one or more optical elements, allowing (a) the illuminating portion of the image capturing and illumination part 130 to illuminate portions of the eye simultaneously from a plurality of angles, preferably as if the sources of illumination were situated on a concave double-curved surface in front of the eye, and (b) the image capturing portion of the image capturing and illumination part 130 to view portions of the eye simultaneously from a plurality of angles, preferably from the perspective of a concave double-curved surface in front of the eye, and consequently forming three-dimensional images of visible portions of the eye; and (5) 
preferably an eye padding part 150, against which the eye is pressed during the identification procedure, preferably blocking light from outside sources from the area between the eye and the optical system 140, and which preferably consists of a soft flexible material and preferably further comprises a mechanism for detecting physical contact of the eye padding part 150 with another physical object. In other preferred embodiments of the mobile apparatus the image capturing and illumination part 130 may also comprise one or more display units other than the aforementioned sources of illumination for indicating various information to the user.

Fig. 28 is a representative illustration of the preferred components of one preferred embodiment of the lower part 200 of certain preferred embodiments of the mobile apparatus, as well as the dual-axis hinge 300, as follows: (1) the lower part casing 250 preferably comprising a portion for connecting to the lower axis 320 of the dual-axis hinge 300 and which preferably is detachable, enabling the user to select the lower part casing 250 of his or her preference
from a preferably wide variety of shapes, patterns and colors; (2) the lower part padding 240 preferably included to protect the contents of the lower part 200 of the mobile apparatus from physical shocks; (3) the data processing and wireless radio communication part 230 comprising one or more integrated circuits, such as but not limited to application-specific integrated circuits; microprocessors; re-configurable integrated circuits; volatile and/or nonvolatile memory circuits; digital signal processing integrated circuits; integrated circuits for carrying out wireless radio communication operations; integrated circuits for geographical positioning; integrated circuits for managing the external connector part 235; RFID circuits; and integrated circuits for controlling electrical power intake, electrical power consumption, and/or the local electrical power source part 220; as well as optionally one or more sockets and/or expansion slots for inclusion of auxiliary detachable components, such as but not limited to memory components, components that implement specific algorithmic functionality, and components that comprise identification information; (4) the external connector part 235, which in this preferred exemplary embodiment comprises a single USB socket that serves as both an external connector for data transfer and electrical power intake; (5) the local electrical power source part 220, which comprises a preferably rechargeable battery, fuel cell, capacitor, and/or other means of storing and providing electricity within the mobile apparatus, as well as preferably protective casing 225 between the local electrical power source part 220 and the data processing and wireless radio communication part 230; (6) the user control part 210, which in this exemplary preferred embodiment comprises three pushbuttons but in other preferred embodiments of the mobile apparatus may preferably comprise one or more pushbuttons, switches, indicators, displays, touch-sensitive 
controls, motion-sensitive controls, microphones, loudspeakers, and/or any other types of user interface components; and (7) the dual-axis hinge 300, preferably further comprising the upper axis 310 and the lower axis 320, which preferably connects the upper part 100 and the lower part 200 of the mobile apparatus. In other preferred embodiments, the mobile apparatus may preferably operate without the local electrical power source part 220 present when connected via the external connector part 235 to an external electrical power source, and in certain preferred embodiments, in particular those equipped with a USB socket enabling continuous electrical power intake from another USB-enabled apparatus, such as but not limited to a mobile handset or computer, the local electrical power source may preferably be completely omitted from the design. In further preferred embodiments the external connector part 235 may preferably comprise one or more dedicated sockets for electrical power intake and/or one or more dedicated sockets for data transfer as well as any other possible means of data transfer excluding
data transfer via wireless radio broadcast technology, such as but not limited to data transfer over a point-to-point link using infrared wavelengths, in addition to or in place of the aforementioned USB socket.

Wiring between the parts in illustrations Fig. 22 through Fig. 28 has been omitted from the illustrations for the sake of clarity, and preferably comprises the following: (1) electrical power wiring between the electrical power intake portion of external connector part 235, the local electrical power source part 220, the data processing and wireless radio communication part 230, the user control part 210, the image capturing and illumination part 130, and/or the eye padding part 150 in the case that this comprises the aforementioned mechanism for detection of physical contact; and (2) data transfer wiring between the data transfer portion of external connector part 235, the data processing and wireless radio communication part 230, the user control part 210, the image capturing and illumination part 130, and/or the eye padding part 150 in the case that this comprises the aforementioned mechanism for detection of physical contact. In alternative embodiments, one or more portions of the data transfer wiring may be replaced with short-range wireless data communication links. In further embodiments, one or more portions of the data transfer wiring and/or electrical power wiring may preferably be replaced with matching plugs and sockets situated on adjacent walls of the upper part casing 110 and lower part casing 250 when the mobile apparatus is in its "semi-open" and/or "fully open" physical state.

All parts shown in illustrations Fig. 22 through Fig. 28 have been portrayed in the drawings for the exclusive purpose of demonstrating the functionality and conceptual components of certain preferred embodiments of the mobile apparatus. Therefore the shapes, geometries, dimensions, and/or structures of the portrayed components are strictly exemplary and laid out as such for the sole purpose of demonstration of concept. In particular, any viable segregation of the aforementioned parts into the upper part 100 and the lower part 200 of the mobile apparatus, and any integration and/or reordering of the parts, are possible within the scope of the present invention. Furthermore, although the mobile apparatus in the above exemplary preferred embodiment preferably comprises two primary parts, the upper part 100 and the lower part 200, preferably attached to one another by means of a dual-axis hinge, in other preferred embodiments the mobile apparatus may consist of a single part, or two or more parts attached to one another preferably by means of one or more hinges and/or other appropriate mechanisms. Auxiliary parts unrelated to the core concept of the present invention, both mechanical and electrical, are for the sake of clarity omitted from the drawings.

Furthermore, the optical system 140 is included in the drawings for the exclusive purpose of demonstrating its function and relative position in front of the image capturing and illumination part 130 and thus no conclusions should be drawn regarding the shape, dimensions, contents and/or materials of the optical system 140 based upon these illustrations. Due to the aforementioned reasons, no conclusions of any kind should be drawn from these illustrations regarding the structure(s), material(s), shape(s), dimensions and/or geometries of the various preferred embodiments of the present invention.

Although the image capturing apparatus of certain preferred embodiments of the present invention is presented in the form of a mobile apparatus for scanning only one eye at a time, the image capturing apparatus in other embodiments of the present invention can comprise capabilities for scanning both eyes simultaneously, or a pair of single-eye image capturing apparatus can be interconnected to form a dual-eye image capturing apparatus. In addition, one or more image capturing apparatus can be fixed to any type of structure when appropriate, thus constituting a fixed eye recognition installation. Data transfer between the image capturing apparatus and authentication apparatus can also be arranged by other means than wireless data communication, such as but not limited to one or more fixed-wired connections.

Although each of the aforementioned preferred embodiments of the present invention is disclosed individually, any combination of one or more of said embodiments is possible within the scope of the present invention.

Disclosure of Certain Embodiments of the Present Invention in Certain Environments and Situations

Fig. 29 is a representative illustration of a situation according to one preferred embodiment of the present invention, in which a person is identified and authenticated using a mobile handset that is connected to a cellular network. The person subject to identification 200 activates his or her image capturing apparatus 210, which forms a local data connection preferably via a Bluetooth® and/or UWB link or the like, and/or a USB and/or FireWire cable or the like, with the person's mobile handset 510. The mobile handset 510 then forms a data connection via a base station apparatus 501, a cellular core network 500, a cellular-to-data network gateway apparatus 502, a network security apparatus 503, and one or more data networks 300, with a service control apparatus 310 and an authentication apparatus 400, which has access to an identification database 410 that comprises information on the eyes of registered persons and a security database 420 that comprises digital keys for encryption and decryption purposes. The authentication apparatus 400 then preferably verifies from the identification database 410 the prior and/or registered user(s) of the image capturing apparatus 210 prior to initiating the identification procedure. The person 200, the image capturing apparatus 210 and the authentication apparatus 400 then carry out the identification procedure according to the disclosed identification method. When the identification procedure is complete, the authentication apparatus 400 informs, via one or more data networks 300, the service control apparatus 310 and possible third parties of the result of the identification, and in particular (a) preferably whether the person 200 is a prior and/or registered user of the image capturing apparatus 210 and/or (b) when required and in the case that the recipient(s) hold an adequate level of security clearance, the identity of the person 200. The authentication apparatus 400 then terminates the data connection to the cellular-to-data network gateway apparatus 502 and all intermediary apparatus. The image capturing apparatus 210 terminates the data connection to the mobile handset 510, and the person 200, now using only the mobile handset 510, may proceed with possible further activities permitted by the service control apparatus 310.
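The reporting step above, in which every recipient learns whether the person 200 is a prior and/or registered user of the image capturing apparatus 210 but the identity itself is released only to recipients holding an adequate level of security clearance, could be sketched as follows. The sketch is purely illustrative; the clearance scale, threshold and field names are hypothetical and not taken from the disclosure:

```python
# Illustrative sketch of clearance-gated result reporting (hypothetical
# clearance scale and field names).

REQUIRED_CLEARANCE = 3  # hypothetical threshold for learning the identity

def build_result_report(is_registered_user: bool, identity: str,
                        recipient_clearance: int) -> dict:
    """Build the identification result to send to one recipient."""
    # Every recipient learns the outcome of the identification procedure.
    report = {"registered_user": is_registered_user}
    # Only adequately cleared recipients learn the identity of the person.
    if recipient_clearance >= REQUIRED_CLEARANCE:
        report["identity"] = identity
    return report
```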

Fig. 30 is a representative illustration of a situation according to another preferred embodiment of the present invention, in which a person is identified and authenticated using a desktop computer that is connected to a data network via a fixed-wired data connection. The person subject to identification 200 activates his or her image capturing apparatus 210, which forms a local data connection preferably via a Bluetooth® and/or UWB link or the like, and/or a USB and/or FireWire cable or the like, with the person's desktop computer 350. The desktop computer then establishes a data connection, via fixed-wired network controller portion 351 of the computer 350, network router apparatus 302, network security apparatus 303 and one or more data networks 300, with a service control apparatus 310 and an authentication apparatus 400, which has access to an identification database 410 that comprises information on the eyes of registered persons and a security database 420 that comprises digital keys for encryption and decryption purposes. The authentication apparatus 400 then preferably verifies from the identification database 410 the prior and/or registered user(s) of the image capturing apparatus 210 prior to initiating the identification procedure. The person 200, the image capturing apparatus 210 and the authentication apparatus 400 then carry out the identification procedure according to the disclosed identification method, the progress of which is preferably displayed to the person 200 in real-time via the desktop computer user interface 352. When the identification procedure is complete, the authentication apparatus
400 informs, via one or more data networks 300, the service control apparatus 310 and possible third parties of the result of the identification, and in particular (a) preferably whether the person 200 is a prior and/or registered user of the image capturing apparatus 210 and/or (b) when required and in the case that the recipient(s) hold an adequate level of security clearance, the identity of the person 200. The authentication apparatus 400 then terminates the data connection to the image capturing apparatus 210 and all intermediary apparatus. The person 200 now using only the desktop computer 350 may proceed with possible further activities permitted by the service control apparatus 310.

Fig. 31 is a representative illustration of a situation according to yet another preferred embodiment of the present invention, in which a person is identified and authenticated using a portable computer that is connected to a wireless data network. The person subject to identification 200 activates his or her image capturing apparatus 210, which forms a local data connection preferably via a Bluetooth® and/or UWB link or the like, and/or a USB and/or FireWire cable or the like, with the person's portable computer 360. The portable computer then establishes a data connection via wireless network controller portion 361 of the computer 360, base station apparatus 301, data network router apparatus 302, data network security apparatus 303, and one or more data networks 300, with a service control apparatus 310 and an authentication apparatus 400, which has access to an identification database 410 that comprises information on the eyes of registered persons and a security database 420 that comprises digital keys for encryption and decryption purposes. The authentication apparatus 400 then preferably verifies from the identification database 410 the prior and/or registered user(s) of the image capturing apparatus 210 prior to initiating the identification procedure. The person 200, the image capturing apparatus 210 and the authentication apparatus 400 then carry out the identification procedure according to the disclosed identification method, the progress of which is preferably displayed to the person 200 in real-time via the user interface of portable computer 360.
When the identification procedure is complete, the authentication apparatus 400 informs, via one or more data networks 300, the service control apparatus 310 and possible third parties of the result of the identification, and in particular (a) preferably whether the person 200 is a prior and/or registered user of the image capturing apparatus 210 and/or (b) when required and in the case that the recipient(s) hold an adequate level of security clearance, the identity of the person 200. The authentication apparatus 400 then terminates the data connection to the image capturing apparatus 210 and all intermediary apparatus. The person 200 now using only the portable computer 360 may proceed with possible further activities permitted by the service control apparatus 310.

Fig. 32 is a representative illustration of a situation according to yet another preferred embodiment of the present invention, in which a person is identified and authenticated at an unsupervised physical location (hereafter "point-of-presence") via a wireless data network. The person subject to identification 200 activates his or her image capturing apparatus 210, which establishes a data connection via the base station apparatus 140, the data network router apparatus 130 and the internal data network 170 with the access control apparatus 120, which in turn establishes a data connection via one or more data networks 300 with an authentication apparatus 400, which has access to an identification database 410 that comprises information on the eyes of registered persons and a security database 420 that comprises digital keys for encryption and decryption purposes. The authentication apparatus 400 then preferably verifies from the identification database 410 the prior and/or registered user(s) of the image capturing apparatus 210 prior to initiating the identification procedure. The person 200, the image capturing apparatus 210 and the authentication apparatus 400 then carry out the identification procedure according to the disclosed identification method. When the identification procedure is complete, the authentication apparatus 400 informs the access control apparatus 120 and possible third parties, via one or more data networks 300, of the result of the identification, and in particular (a) preferably whether the person 200 is a prior and/or registered user of the image capturing apparatus 210 and/or (b) when required and in the case that the recipient(s) hold an adequate level of security clearance, the identity of the person 200. 
Finally, the access control apparatus 120 terminates the data connections to (a) the image capturing apparatus 210 and (b) the authentication apparatus 400 and all intermediary apparatus, and instructs the point-of-presence access system 150 via internal data network 170 and data network router apparatus 130, and/or possible third parties via one or more data networks 300, to take appropriate actions.

Fig. 33 is a representative illustration of a situation according to yet another preferred embodiment of the present invention, in which a person is identified and authenticated at a physical location (hereafter "point-of-presence") supervised by a representative of the point-of-presence and/or of the authenticating party via a wireless data network. As the person subject to identification 200 approaches the point-of-presence, the supervisor 100 preferably first requests initiation of the identification procedure from the access control apparatus 120 via the access control user interface 110, data network router apparatus 130 and internal data network 170. The person subject to identification 200 then activates his or her image capturing apparatus 210, which establishes a data connection via the base station apparatus 140, the data network router apparatus 130 and the internal data network 170 with the access control
apparatus 120, which in turn establishes a data connection via one or more data networks 300 with an authentication apparatus 400, which has access to an identification database 410 that comprises information on the eyes of registered persons and a security database 420 that comprises digital keys for encryption and decryption purposes. The authentication apparatus 400 then preferably verifies from the identification database 410 the prior and/or registered user(s) of the image capturing apparatus 210 prior to initiating the identification procedure. The person 200, the image capturing apparatus 210 and the authentication apparatus 400 then carry out the identification procedure according to the disclosed identification method, the progress of which is preferably displayed to the supervisor 100 in real-time via the point-of-presence user interface 110. When the identification procedure is complete, the authentication apparatus 400 informs the access control apparatus 120 and possible third parties, via one or more data networks 300, of the result of the identification, and in particular (a) preferably whether the person 200 is a prior and/or registered user of the image capturing apparatus 210 and/or (b) when required and in the case that the recipient(s) hold an adequate level of security clearance, the identity of the person 200. Preferably the access control apparatus 120 also relays this information to the supervisor 100 via the internal data network 170, the data network router apparatus 130 and the point-of-presence user interface 110. 
Finally, the access control apparatus 120 terminates the data connections to (a) the image capturing apparatus 210 and (b) the authentication apparatus 400 and all intermediary apparatus, and instructs the point-of-presence access system 150 via internal data network 170 and data network router apparatus 130, and/or possible third parties via one or more data networks 300, and preferably also the supervisor 100 via the internal data network 170, the data network router apparatus 130 and the point-of-presence user interface 110, to take appropriate actions.

Yet another preferred embodiment of the present invention may further be used as a stand-alone identification system in an isolated environment that lacks external data network and/or cellular network connectivity, such as but not limited to a building, gate, vehicle, vessel, aircraft or mobile weapon system. In this case the image capturing apparatus preferably communicates via a local wireless data network with the stand-alone authentication apparatus. Fig. 34 is a representative illustration of a situation, in which a person 200 approaches a vehicle 600 equipped with a stand-alone identification system according to this preferred embodiment of the present invention. When the person 200 is using the system with the vehicle 600 for the first time, he or she must carry out an initial registration procedure as follows. The person 200 activates his or her image capturing apparatus 210, which transfers one or more encryption/decryption digital keys to the stand-alone authentication apparatus 610
via the direct data transfer apparatus 620 by a secure point-to-point link, such as but not limited to (a) a data cable between a data transfer socket of the image capturing apparatus 210 and a data transfer socket of the direct data transfer apparatus 620, preferably using USB and/or FireWire technology; (b) a data transfer socket of the direct data transfer apparatus 620 that connects directly to a data transfer socket of the image capturing apparatus 210, preferably using USB technology; (c) a wireless point-to-point link, preferably using infrared communication technology, between a transmitter/receiver portion of the image capturing apparatus 210 and a transmitter/receiver portion of the direct data transfer apparatus 620, which is physically protected from illicit interception, e.g. by positioning the image capturing apparatus 210 against or close to the direct data transfer apparatus 620 and preventing possible interception by surrounding the connection with padding and/or edges or the like. The person 200 may then use the image capturing apparatus 210, which communicates with the stand-alone authentication apparatus 610 via (a) the antenna 631 of the vehicle 600 and wireless network controller portion 630 of the stand-alone authentication apparatus 610 and/or (b) the direct data transfer apparatus 620 using a secure point-to-point link as described above, to identify himself or herself according to the disclosed identification method. Once the person 200 has been identified and the information on his or her eye(s) is stored in the stand-alone authentication apparatus 610, the person 200 is granted control over the doors and/or the ignition system of the vehicle 600. Similarly, in the future the person 200 is granted access to the doors and/or the ignition system of the vehicle 600 by authenticating himself or herself according to the disclosed identification method. 
Once successfully identified, the person 200 may via the user controls of the stand-alone authentication apparatus 610 authorize acceptance of one or more additional persons carrying out the aforementioned initial registration procedure and who are, once successfully identified and registered, granted full or limited access rights to the vehicle 600. In the case that the image capturing apparatus 210 of the person 200 is stolen, misplaced or otherwise absent, the person 200 may alternatively access the vehicle as follows: (a) using another image capturing apparatus the person 200 registers the encryption/decryption digital keys at the stand-alone authentication apparatus 610 as described above; (b) the person authenticates himself or herself according to the disclosed identification method; and (c) using the user controls of the stand-alone authentication apparatus 610, the person 200 may opt to remove the encryption/decryption digital keys of the missing image capturing apparatus 210, and/or optionally opt to retain the encryption/decryption digital keys of the other image capturing apparatus. The aforementioned stand-alone identification system can be used to control access to virtually any type of
physical entity in any environment where electrical power supply is available, and it is particularly useful in isolated environments that lack external data network and/or cellular network connectivity. Hence, the use case above involving a vehicle is purely exemplary and by no means should be interpreted to limit the extent of applicability of the disclosed stand-alone identification system for any other purposes and/or circumstances.
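The registration, enrollment and key-revocation procedures of the stand-alone authentication apparatus 610 described above can be summarized as a small key store. The sketch below is purely illustrative; the class and method names are hypothetical, and the disclosure prescribes no particular data structures:

```python
# Illustrative sketch of the key store of a stand-alone authentication
# apparatus: device keys are registered over a secure point-to-point link,
# persons are enrolled after identification, and the keys of a stolen or
# misplaced device can later be revoked (hypothetical names throughout).

class StandAloneAuthApparatus:
    def __init__(self) -> None:
        self.device_keys = {}    # device id -> encryption/decryption key(s)
        self.enrolled_eyes = {}  # person id -> stored eye information

    def register_device_key(self, device_id: str, key: bytes) -> None:
        # Carried out over a data cable, a direct socket connection, or a
        # physically shielded infrared point-to-point link.
        self.device_keys[device_id] = key

    def enroll_person(self, person_id: str, eye_info: bytes) -> None:
        # Stored once the person has completed the identification procedure.
        self.enrolled_eyes[person_id] = eye_info

    def revoke_device_key(self, device_id: str) -> None:
        # An identified user may remove the keys of a missing image capturing
        # apparatus while retaining those of a replacement apparatus.
        self.device_keys.pop(device_id, None)
```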

Although each of the aforementioned preferred embodiments of the present invention is disclosed individually, any combination of one or more of said embodiments is possible within the scope of the present invention.

Disclosure of Certain Embodiments of the Present Invention in Certain Environments and Situations Further Comprising Certain Security Measures

According to certain preferred embodiments of the present invention, transmitted data is preferably encrypted and decrypted in several stages, preferably in such a manner that: (1) the image capturing apparatus may encrypt data using (a) the unique personal digital key of the person, (b) the device-specific digital key of the image capturing apparatus and/or of a detachable identification component that is inserted into the image capturing apparatus and/or (c) the present time-code; and (2) the cellular and/or data network operator organization, or the organization controlling the point-of-presence, may encrypt data using: (a) its unique operator-specific digital key, (b) the present link-specific digital key and/or (c) the present time-code. The authentication apparatus preferably obtains the aforementioned digital keys for encryption and/or decryption of information as follows: (1) the personal digital key is retrieved from the security database based upon a personal identification number transmitted from the image capturing apparatus; (2) the device-specific digital key is retrieved from the security database based upon a device serial number and/or a detachable identification module number transmitted from the image capturing apparatus; (3) the link-specific digital key is retrieved from the security database based upon link-related information provided by the cellular and/or data network operator organization, or the organization controlling the point-of-presence; (4) the location-specific digital key is retrieved from the security database based upon location-related information provided by the cellular and/or data network operator organization, or the organization controlling the point-of-presence; (5) the operator-specific digital key is retrieved from the security database after identifying the organization that has established the data connection with the authentication apparatus; and (6) time-codes are obtained after synchronizing clocks and/or 
timers between the encrypting and decrypting parties.
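The key-retrieval rules enumerated above can be summarized as a single lookup against the security database. The sketch below is purely illustrative: the database is modeled as plain dictionaries, the field names are hypothetical, and the time-codes are omitted because they are obtained by clock and/or timer synchronization rather than database lookup:

```python
# Illustrative sketch of resolving the digital keys from the security
# database (hypothetical field names; time-codes omitted, as they come from
# clock/timer synchronization rather than a database lookup).

def resolve_keys(security_db: dict, request: dict) -> dict:
    """Return the digital keys needed to process one transmission."""
    return {
        # (1) personal key, by the transmitted personal identification number
        "personal": security_db["personal"][request["personal_id_number"]],
        # (2) device key, by device serial and/or identification module number
        "device": security_db["device"][request["device_serial"]],
        # (3) link key, from link-related information from the operator
        "link": security_db["link"][request["link_info"]],
        # (4) location key, from location-related information from the operator
        "location": security_db["location"][request["location_info"]],
        # (5) operator key, after identifying the connecting organization
        "operator": security_db["operator"][request["operator_id"]],
    }
```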

Fig. 35 is a representative illustration of the method for providing secure data transfer according to one preferred embodiment of the present invention, in the case that a person is identified and authenticated via a mobile handset that is connected to a cellular network, as described above and illustrated in Fig. 29, and further augmented with security features as described below. In the case that the image capturing apparatus 210 transmits data to the authentication apparatus 400, the following steps are preferably taken: (1) the image capturing apparatus 210 preferably encrypts the data using the personal digital key 221, the device-specific digital key 231, and/or the present time-code 1000; (2) the image capturing apparatus 210 preferably transmits the data to the mobile handset 510 via (a) a wireless data communication link 215 between the local wireless network controller portion 211 of the image capturing apparatus 210 and the local wireless network controller portion 512 of the mobile handset 510, in which Bluetooth® and/or UWB are the preferred wireless communication technologies, and/or (b) a fixed-wired link 216 between a data transfer socket 212 of the image capturing apparatus 210 and a data transfer socket 513 of the mobile handset 510, in which USB and/or FireWire are the preferred fixed-wired communication technologies; (3) the mobile handset 510 preferably encrypts the data using the link-specific digital key 571 and/or the present time-code 1001; (4) the mobile handset 510 preferably transmits the data via the cellular network controller portion 511 of the mobile handset 510, the wireless cellular link 515, the cellular network base station apparatus 501, the cellular core network 500 and the cellular-to-data network gateway apparatus 502 to the network security apparatus 503; (5) the network security apparatus 503 preferably encrypts the data using the location-specific digital key 581, the operator-specific digital key 591 and/or the 
present time-code 1002; (6) the network security apparatus 503 preferably transmits the data via one or more data networks 300 to the authentication apparatus 400; (7) the authentication apparatus preferably decrypts the data using the present time-code 1002, the operator-specific digital key 591, the location-specific digital key 581, the present time-code 1001, the link-specific digital key 571, the present time-code 1000, the device-specific digital key 231, and/or the personal digital key 221. In the case that the authentication apparatus 400 transmits data to the image capturing apparatus 210, the following steps are taken: (1) the authentication apparatus 400 preferably encrypts the data using the personal digital key 221, the device-specific digital key 231 and/or present time-code 1000; (2) the authentication apparatus 400 preferably encrypts the resulting data using the link-specific digital key 571 and/or present time-code 1001; (3) the authentication apparatus 400 preferably encrypts the resulting data using the location-specific digital key 581, the operator-specific digital key 591 and/or the present time-code 1002; (4) the authentication apparatus 400 preferably transmits the data via one or more data networks 300 to the network security apparatus 503; (5) the network security apparatus 503 preferably decrypts the data using the present time-code 1002, the operator-specific digital key 591 and/or location-specific digital key 581; (6) the network security apparatus 503 preferably transmits the data to the mobile handset 510 via the cellular-to-data network gateway apparatus 502, the cellular core network 500, the cellular network base station apparatus 501, the wireless cellular link 515, and the cellular network controller portion 511 of the mobile handset 510; (7) the mobile handset 510 preferably decrypts the data using the time-code 1001 and/or link-specific digital key 571; (8) the mobile handset 510 preferably transmits the data to the image capturing apparatus 210 via (a) a wireless data communication link 215 between the local wireless network controller portion 512 of the mobile handset 510 and local wireless network controller portion 211 of the image capturing apparatus 210, in which Bluetooth® and/or UWB are the preferred wireless communication technologies, and/or (b) a fixed-wired link 216 between a data transfer socket 513 of the mobile handset 510 and a data transfer socket 212 of the image capturing apparatus 210, in which USB and/or FireWire are the preferred fixed-wired communication technologies; and (9) the image capturing apparatus 210 preferably decrypts the data using the present time-code 1000, the device-specific digital key 231, and/or the personal digital key 221.
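The layered encryption in the steps above behaves like an onion: each hop adds one layer, and the authentication apparatus removes the layers in reverse order. The sketch below illustrates that sequence only; the XOR keystream cipher, the rule for combining several keys into one layer key, and all key and time-code values are hypothetical placeholders, since the disclosure does not fix a particular algorithm.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a deterministic pseudo-random keystream (toy stream cipher)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xcrypt(data: bytes, keys: list[bytes], time_code: int) -> bytes:
    """Apply one encryption layer; XOR is its own inverse, so this also decrypts."""
    layer_key = hashlib.sha256(b"".join(keys)).digest()   # assumed key-combination rule
    nonce = time_code.to_bytes(8, "big")
    ks = keystream(layer_key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# Hypothetical keys and time-codes mirroring the numbered items in the text.
personal_key_221, device_key_231 = b"personal-221", b"device-231"
link_key_571 = b"link-571"
location_key_581, operator_key_591 = b"location-581", b"operator-591"
tc_1000, tc_1001, tc_1002 = 111, 222, 333

plaintext = b"iris pattern data"
# Steps (1), (3), (5): each hop adds its own encryption layer.
c = xcrypt(plaintext, [personal_key_221, device_key_231], tc_1000)   # image capturing apparatus
c = xcrypt(c, [link_key_571], tc_1001)                               # mobile handset
c = xcrypt(c, [location_key_581, operator_key_591], tc_1002)         # network security apparatus
# Step (7): the authentication apparatus removes the layers in reverse order.
p = xcrypt(c, [location_key_581, operator_key_591], tc_1002)
p = xcrypt(p, [link_key_571], tc_1001)
p = xcrypt(p, [personal_key_221, device_key_231], tc_1000)
assert p == plaintext
```

Note that a real block or stream cipher would generally not commute across layers, which is why the downlink direction in the text pre-applies all three layers at the authentication apparatus and lets each hop strip exactly one.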
The authentication apparatus preferably obtains the aforementioned digital keys for encryption and/or decryption of data as follows: (1) the personal digital key 221 is retrieved from the security database 420 based upon the personal identification number 220 transmitted from the image capturing apparatus; (2) the device-specific digital key 231 is retrieved from the security database 420 based upon the device serial number and/or detachable identification module number 230 transmitted from the image capturing apparatus; (3) the link-specific digital key 571 is retrieved from the security database 420 based upon an identification number 570 of the mobile handset 510, such as but not limited to the mobile subscription number of the Subscriber Identity Module (SIM) inserted into the mobile handset 510 and/or the International Mobile Equipment Identity (IMEI) of the mobile handset 510; (4) the location-specific digital key 581 is retrieved from the security database 420 based upon information 580 from the network security apparatus 503 regarding the identification number of the cellular network base station apparatus 501; (5) the operator-specific digital key 591 is retrieved from the security database 420 based upon the logical address 590 of the network security apparatus 503; and (6) the time-code 1000 is obtained by synchronizing clocks and/or timers between the image capturing apparatus 210 and the authenticating apparatus 400, the time-code 1001 is obtained by synchronizing clocks and/or timers between the mobile handset 510 and the authenticating apparatus 400, and the time-code 1002 is obtained by synchronizing clocks and/or timers between the network security apparatus 503 and the authenticating apparatus 400.
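The key-retrieval steps above amount to indexed lookups in the security database 420. A minimal sketch follows; the identifier kinds, identifier strings, and key values are all hypothetical, and a production database would of course be access-controlled storage rather than an in-memory table.

```python
# Hypothetical security database: (identifier kind, identifier value) -> digital key.
SECURITY_DB = {
    ("personal_id", "220"): b"personal-key-221",
    ("device_serial", "230"): b"device-key-231",
    ("handset_id", "570"): b"link-key-571",
    ("base_station", "580"): b"location-key-581",
    ("security_apparatus", "590"): b"operator-key-591",
}

def retrieve_key(kind: str, identifier: str) -> bytes:
    """Look up a digital key by the identifier transmitted with the request."""
    try:
        return SECURITY_DB[(kind, identifier)]
    except KeyError:
        # An unknown identifier means the party is not registered.
        raise PermissionError(f"no key registered for {kind}={identifier}") from None

assert retrieve_key("handset_id", "570") == b"link-key-571"
```

An unknown identifier is treated as an authorization failure rather than a silent default, which matches the text's premise that every key is bound to a registered identity, device, link, location, or operator.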

Fig. 36 is a representative illustration of the method for providing secure data transfer according to another preferred embodiment of the present invention, in the case that a person is identified and authenticated via a desktop computer that is connected to a data network via a fixed-wired data connection, as described above and illustrated in Fig. 30, and further augmented with security features as described below. In the case that the image capturing apparatus 210 transmits data to the authentication apparatus 400, the following steps are preferably taken: (1) the image capturing apparatus 210 preferably encrypts the data using the personal digital key 221, the device-specific digital key 231, and/or the present time-code 1000; (2) the image capturing apparatus 210 preferably transmits the data to the computer 350 via (a) a wireless data communication link 215 between the local wireless network controller portion 211 of the image capturing apparatus 210 and the local wireless network controller portion 352 of the computer 350, in which Bluetooth® and/or UWB are the preferred wireless communication technologies, and/or (b) a fixed-wired link 216 between a data transfer socket 212 of the image capturing apparatus 210 and a data transfer socket 353 of the computer 350, in which USB and/or FireWire are the preferred fixed-wired communication technologies; (3) the computer 350 preferably encrypts the data using the link-specific digital key 371 and/or the present time-code 1001; (4) the computer 350 preferably transmits the data via the fixed-wired network controller portion 351 of the computer 350 and the data network router apparatus 302 to the network security apparatus 303; (5) the network security apparatus 303 preferably encrypts the data using the location-specific digital key 381, the operator-specific digital key 391 and/or the present time-code 1002; (6) the network security apparatus 303 preferably transmits the data via one or more data networks
300 to the authentication apparatus 400; (7) the authentication apparatus preferably decrypts the data using the present time-code 1002, the operator-specific digital key 391, the location-specific digital key 381, the present time-code 1001, the link-specific digital key 371, the present time-code 1000, the device-specific digital key 231, and/or the personal digital key 221. In the case that the authentication apparatus 400 transmits data to the image capturing apparatus 210, the following steps are taken: (1) the authentication apparatus 400 preferably encrypts the data using the personal digital key 221, the device-specific digital key 231 and/or present time-code 1000; (2) the authentication apparatus 400 preferably encrypts the resulting data using the link-specific digital key 371 and/or present time-code 1001; (3) the authentication apparatus 400 preferably encrypts the resulting data using the location-specific digital key 381, the operator-specific digital key 391 and/or the present time-code 1002; (4) the authentication apparatus 400 preferably transmits the data via one or more data networks 300 to the network security apparatus 303; (5) the network security apparatus 303 preferably decrypts the data using the present time-code 1002, the operator-specific digital key 391 and/or location-specific digital key 381; (6) the network security apparatus 303 preferably transmits the data to the computer 350 via the data network router apparatus 302 and the fixed-wired network controller portion 351 of the computer 350; (7) the computer 350 preferably decrypts the data using the time-code 1001 and/or link-specific digital key 371; (8) the computer 350 preferably transmits the data to the image capturing apparatus 210 via (a) a wireless data communication link 215 between the local wireless network controller portion 352 of the computer 350 and local wireless network controller portion 211 of the image capturing apparatus 210, in which Bluetooth® and/or UWB are the preferred wireless communication technologies, and/or (b) a fixed-wired link 216 between a data transfer socket 353 of the computer 350 and a data transfer socket 212 of the image capturing apparatus 210, in which USB and/or FireWire are the preferred fixed-wired communication technologies; and (9) the image capturing apparatus 210 preferably decrypts the data using the present time-code 1000, the device-specific digital key 231, and/or the personal digital key 221.
The authentication apparatus preferably obtains the aforementioned digital keys for encryption and/or decryption of data as follows: (1) the personal digital key 221 is retrieved from the security database 420 based upon the personal identification number 220 transmitted from the image capturing apparatus; (2) the device-specific digital key 231 is retrieved from the security database 420 based upon the device serial number and/or detachable identification module number 230 transmitted from the image capturing apparatus; (3) the link-specific digital key 371 is retrieved from the security database 420 based upon the logical address 370 of the fixed-wired network controller portion 351 of the computer 350; (4) the location-specific digital key 381 is retrieved from the security database 420 based upon information 380 from the network security apparatus 303 regarding the logical address of the data network router apparatus 302; (5) the operator-specific digital key 391 is retrieved from the security database 420 based upon the logical address 390 of the network security apparatus 303; and (6) the time-code 1000 is obtained by synchronizing clocks and/or timers between the image capturing apparatus 210 and the authenticating apparatus 400, the time-code 1001 is obtained by synchronizing clocks and/or timers between the computer 350 and the authenticating apparatus 400, and the time-code 1002 is obtained by synchronizing clocks and/or timers between the network security apparatus 303 and the authenticating apparatus 400.

Fig. 37 is a representative illustration of the method for providing secure data transfer according to yet another preferred embodiment of the present invention, in the case that a person is identified and authenticated via a portable computer that is connected to a wireless data network, as described above and illustrated in Fig. 31, and further augmented with security features as described below. In the case that the image capturing apparatus 210 transmits data to the authentication apparatus 400, the following steps are preferably taken: (1) the image capturing apparatus 210 preferably encrypts the data using the personal digital key 221, the device-specific digital key 231, and/or the present time-code 1000; (2) the image capturing apparatus 210 preferably transmits the data to the computer 360 via (a) a wireless data communication link 215 between the local wireless network controller portion 211 of the image capturing apparatus 210 and the local wireless network controller portion 362 of the computer 360, in which Bluetooth® and/or UWB are the preferred wireless communication technologies, and/or (b) a fixed-wired link 216 between a data transfer socket 212 of the image capturing apparatus 210 and a data transfer socket 363 of the computer 360, in which USB and/or FireWire are the preferred fixed-wired communication technologies; (3) the computer 360 preferably encrypts the data using the link-specific digital key 371 and/or the present time-code 1001; (4) the computer 360 preferably transmits the data via the wireless network controller portion 361 of the computer 360, the wireless data link 315, the data network base station apparatus 301 and the data network router apparatus 302 to the network security apparatus 303; (5) the network security apparatus 303 preferably encrypts the data using the location-specific digital key 381, the operator-specific digital key 391 and/or the present time-code 1002; (6) the network security apparatus 303 preferably
transmits the data via one or more data networks 300 to the authentication apparatus 400; (7) the authentication apparatus preferably decrypts the data using the present time-code 1002, the operator-specific digital key 391, the location-specific digital key 381, the present time-code 1001, the link-specific digital key 371, the present time-code 1000, the device-specific digital key 231, and/or the personal digital key 221. In the case that the authentication apparatus 400 transmits data to the image capturing apparatus 210, the following steps are taken: (1) the authentication apparatus 400 preferably encrypts the data using the personal digital key 221, the device-specific digital key 231 and/or present time-code 1000; (2) the authentication apparatus 400 preferably encrypts the resulting data using the link-specific digital key 371 and/or present time-code 1001; (3) the authentication apparatus 400 preferably encrypts the resulting data using the location-specific digital key 381, the operator-specific digital key 391 and/or the present time-code 1002; (4) the authentication apparatus 400 preferably transmits the data via one or more data networks 300 to the network security apparatus 303; (5) the network security apparatus 303 preferably decrypts the data using the present time-code 1002, the operator-specific digital key 391 and/or location-specific digital key 381; (6) the network security apparatus 303 preferably transmits the data to the computer 360 via the data network router apparatus 302, the data network base station apparatus 301, the wireless data link 315, and the wireless network controller portion 361 of the computer 360; (7) the computer 360 preferably decrypts the data using the time-code 1001 and/or link-specific digital key 371; (8) the computer 360 preferably transmits the data to the image capturing apparatus 210 via (a) a wireless data communication link 215 between the local wireless network controller portion 362 of the computer 360 and local wireless network controller portion 211 of the image capturing apparatus 210, in which Bluetooth® and/or UWB are the preferred wireless communication technologies, and/or (b) a fixed-wired link 216 between a data transfer socket 363 of the computer 360 and a data transfer socket 212 of the image capturing apparatus 210, in which USB and/or FireWire are the preferred fixed-wired communication technologies; and (9) the image capturing apparatus 210 preferably decrypts the data using the present time-code 1000, the device-specific digital key 231, and/or the personal digital key 221.
The authentication apparatus preferably obtains the aforementioned digital keys for encryption and/or decryption of data as follows: (1) the personal digital key 221 is retrieved from the security database 420 based upon the personal identification number 220 transmitted from the image capturing apparatus; (2) the device-specific digital key 231 is retrieved from the security database 420 based upon the device serial number and/or detachable identification module number 230 transmitted from the image capturing apparatus; (3) the link-specific digital key 371 is retrieved from the security database 420 based upon the logical address 370 of the wireless network controller portion 361 of the computer 360; (4) the location-specific digital key 381 is retrieved from the security database 420 based upon information 380 from the network security apparatus 303 regarding the logical address of the data network router apparatus 302 and/or the identification number of the data network base station apparatus 301; (5) the operator-specific digital key 391 is retrieved from the security database 420 based upon the logical address 390 of the network security apparatus 303; and (6) the time-code 1000 is obtained by synchronizing clocks and/or timers between the image capturing apparatus 210 and the authenticating apparatus 400, the time-code 1001 is obtained by synchronizing clocks and/or timers between the computer 360 and the authenticating apparatus 400, and the time-code 1002 is obtained by synchronizing clocks and/or timers between the network security apparatus 303 and the authenticating apparatus 400.
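A shared time-code between two parties with synchronized clocks can be realized, for example, as a truncated HMAC over a time-step counter, in the style of TOTP; the step size, shared secret, and 32-bit truncation below are assumptions for illustration, not part of the disclosure.

```python
import hashlib
import hmac
import struct

def time_code(shared_secret: bytes, now_seconds: int, step: int = 30) -> int:
    """Derive a short-lived code both parties can compute after clock synchronization."""
    counter = now_seconds // step            # identical on both sides within one step
    mac = hmac.new(shared_secret, struct.pack(">Q", counter), hashlib.sha256).digest()
    return int.from_bytes(mac[:4], "big")    # truncate to a 32-bit code

# Both endpoints compute the same code as long as their clocks agree within one step:
# 1_000_000 and 1_000_019 both fall in the window starting at 999_990.
secret = b"sync-secret"
assert time_code(secret, 1_000_000) == time_code(secret, 1_000_019)
```

Because the code changes every `step` seconds, mixing it into each encryption layer (as the time-codes 1000, 1001 and 1002 are in the text) limits the replay window of any intercepted message to roughly one time step.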

Fig. 38 is a representative illustration of the method for providing secure data transfer according to yet another preferred embodiment of the present invention, in the case that a person is identified and authenticated via a wireless data network at either an unsupervised or a supervised physical location (hereafter "point-of-presence"), as described above and illustrated in Fig. 32 and Fig. 33, and further augmented with security features as described below. In the case that the image capturing apparatus 210 transmits data to the authentication apparatus 400, the following steps are preferably taken: (1) the image capturing apparatus 210 preferably encrypts the data using the personal digital key 221, the device-specific digital key 231, and/or the present time-code 1000; (2) the image capturing apparatus 210 preferably transmits the data to the internal access control apparatus 120 via the wireless data communication link 215 between the local wireless network controller portion 211 of the image capturing apparatus 210 and the data network base station apparatus 140, preferably using Bluetooth® and/or UWB wireless communication technology, the data network router apparatus 130, and the internal data network 170; (3) the internal access control apparatus 120 preferably encrypts the data using the location-specific digital key 181, the operator-specific digital key 191 and/or the present time-code 1001; (4) the internal access control apparatus 120 preferably transmits the data via one or more data networks 300 to the authentication apparatus 400; (5) the authentication apparatus preferably decrypts the data using the present time-code 1001, the operator-specific digital key 191, the location-specific digital key 181, the present time-code 1000, the device-specific digital key 231, and/or the personal digital key 221.
In the case that the authentication apparatus 400 transmits data to the image capturing apparatus 210, the following steps are taken: (1) the authentication apparatus 400 preferably encrypts the data using the personal digital key 221, the device-specific digital key 231 and/or present time-code 1000; (2) the authentication apparatus 400 preferably encrypts the resulting data using the location-specific digital key 181, the operator-specific digital key 191 and/or the present time-code 1001; (3) the authentication apparatus 400 preferably transmits the data via one or more data networks 300 to the internal access control apparatus 120; (4) the internal access control apparatus 120 preferably decrypts the data using the present time-code 1001, the operator-specific digital key 191 and/or location-specific digital key 181; (5) the internal access control apparatus 120 preferably transmits the data to the image capturing apparatus 210 via the internal data network 170, the data network router apparatus 130, the data network base station apparatus 140, the wireless data communication link 215, preferably using Bluetooth® and/or UWB wireless communication technology, and the local wireless network controller portion 211 of the image capturing apparatus 210; (6) the image capturing apparatus 210 preferably decrypts the data using the present time-code 1000, the device-specific digital key 231, and/or the personal digital key 221. The authentication apparatus preferably obtains the aforementioned digital keys for encryption and/or decryption of data as follows: (1) the personal digital key 221 is retrieved from the security database 420 based upon the personal identification number 220 transmitted from the image capturing apparatus; (2) the device-specific digital key 231 is retrieved from the security database 420 based upon the device serial number and/or detachable identification module number 230 transmitted from the image capturing apparatus; (3) the location-specific digital key 181 is retrieved from the security database 420 based upon information 180 from the internal access control apparatus 120 regarding the point-of-presence in consideration; (4) the operator-specific digital key 191 is retrieved from the security database 420 based upon the logical address 190 of the internal access control apparatus 120; and (5) the time-code 1000 is obtained by synchronizing clocks and/or timers between the image capturing apparatus 210 and the authenticating apparatus 400, and the time-code 1001 is obtained by synchronizing clocks and/or timers between the internal access control apparatus 120 and the authenticating apparatus 400.

Fig. 39 is a representative illustration of the method for providing secure data transfer according to yet another preferred embodiment of the present invention, in the case that a person is identified and authenticated by a stand-alone identification system in an isolated environment that lacks external data network and/or cellular network connectivity, as described above and illustrated in Fig. 34, and further augmented with security features as described below. In the case that the image capturing apparatus 210 transmits data to the authentication apparatus 400, the following steps are preferably taken: (1) the image capturing apparatus 210 preferably encrypts the data using the personal digital key 221, the device-specific digital key 231, and/or the present time-code 1000; (2) the image capturing apparatus 210 preferably transmits the data to the stand-alone authentication apparatus 610 via (a) a wireless data communication link 215 between the local wireless network controller portion 211 of the image capturing apparatus 210 and the local wireless network controller portion 630 of the stand-alone authentication apparatus 610 via the antenna 631, in which Bluetooth® and/or UWB are the preferred wireless communication technologies, and/or (b) a secure point-to-point link 216 between a data transfer socket 212 of the image capturing apparatus 210 and the direct data transfer apparatus 620 as described above; (3) the stand-alone authentication apparatus preferably decrypts the data using the present time-code 1000, the device-specific digital key 231, and/or the personal digital key 221. In the case that the stand-alone authentication apparatus 610 transmits data to the image capturing apparatus 210, the following steps are taken: (1) the stand-alone authentication apparatus 610 preferably encrypts the data using the personal digital key 221, the device-specific digital key 231 and/or present time-code 1000; (2) the stand-alone authentication apparatus 610 preferably transmits the data to the image capturing apparatus 210 via (a) a wireless data communication link 215 between the local wireless network controller portion 630 of the stand-alone authentication apparatus 610 and local wireless network controller portion 211 of the image capturing apparatus 210 via the antenna 631, in which Bluetooth® and/or UWB are the preferred wireless communication technologies, and/or (b) a secure point-to-point link 216 between a data transfer socket 212 of the image capturing apparatus 210 and the direct data transfer apparatus 620 as described above; and (3) the image capturing apparatus 210 preferably decrypts the data using the present time-code 1000, the device-specific digital key 231, and/or the personal digital key 221. The stand-alone authentication apparatus 610 preferably obtains the aforementioned digital keys for encryption and/or decryption of data as follows: (1) the personal digital key 221 and the device-specific digital key 231 are stored in the stand-alone authentication apparatus during the initial registration procedure as described above; and (2) the time-code 1000 is obtained by synchronizing clocks and/or timers between the image capturing apparatus 210 and the stand-alone authenticating apparatus 610.
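In the stand-alone case the protocol collapses to a single encryption layer whose keys were stored locally at registration. A minimal sketch, again with a toy XOR keystream cipher and hypothetical key material standing in for the unspecified algorithm:

```python
import hashlib

class StandAloneAuthenticator:
    """Holds the keys registered for one user and decrypts incoming data (sketch)."""

    def __init__(self, personal_key: bytes, device_key: bytes):
        # Stored during the initial registration procedure described in the text.
        self._layer_key = hashlib.sha256(personal_key + device_key).digest()

    def xcrypt(self, data: bytes, time_code: int) -> bytes:
        """Toy XOR stream cipher; applying it twice with the same inputs restores data."""
        nonce = time_code.to_bytes(8, "big")
        ks = b""
        counter = 0
        while len(ks) < len(data):
            ks += hashlib.sha256(self._layer_key + nonce + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(a ^ b for a, b in zip(data, ks))

    def decrypt(self, ciphertext: bytes, time_code: int) -> bytes:
        return self.xcrypt(ciphertext, time_code)   # XOR cipher is its own inverse

auth = StandAloneAuthenticator(b"personal-221", b"device-231")
# The device side would apply the same single layer before transmission.
ciphertext = auth.xcrypt(b"captured eye data", time_code=42)
assert auth.decrypt(ciphertext, time_code=42) == b"captured eye data"
```

Because no network-side parties exist in the isolated environment, the link-, location- and operator-specific layers of the earlier embodiments simply have no counterpart here.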

All components shown in illustrations Fig. 29 through Fig. 39 have been portrayed in the drawings for the exclusive purpose of demonstrating the functionality and conceptual components of certain preferred embodiments of the disclosed identification method and system, in particular the data transfer and data security sequences and mechanisms. Therefore the presented connectivity between components, as well as the order and quantity of components, is strictly exemplary and laid out as such for the sole purpose of demonstration of concept. In particular, various components, such as but not limited to the authentication apparatus 400, the authentication database 410 and the security database 420, may be combined into one or two physical components, or replicated to form a greater quantity of components, e.g. for robustness and/or load-sharing purposes. Auxiliary components unrelated to the core concept of the present invention, both mechanical and electrical, are omitted from the drawings for the sake of clarity. In particular, the exemplary use cases of the identification system shown in Fig. 29 through Fig. 34 reflect the state of modern communication technology; similar use cases for the identification system are possible for future communication infrastructures yet to be developed. Regarding the method for secure data transfer, any or all encryption steps and corresponding decryption steps may be omitted if necessary, e.g. in order to conserve computation load and/or transmission bandwidth. In addition, even if one or more encryption/decryption steps are omitted in portions of the communication infrastructure that lack adequate support for encryption and/or decryption, the encryption and decryption steps in other portions of the communication infrastructure with adequate support remain possible and undisrupted by that absence. Various properties of the remaining encryption/decryption steps, including but not limited to the encryption/decryption algorithm and/or the lengths and/or qualities of the encryption/decryption keys, may be varied in order to compensate for the absence of encryption/decryption steps elsewhere in the communication infrastructure. Due to the aforementioned reasons, many other embodiments of the identification system and/or arrangements of the components of the above disclosed embodiments of the present invention are possible.
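The point that individual encryption layers may be omitted independently can be sketched by treating the applied layers as data: the receiver unwinds exactly the layers that were applied, and the remaining layers are unaffected by any that were skipped. The layer names, keys, and XOR cipher below are illustrative assumptions.

```python
import hashlib

def xcrypt(data: bytes, key: bytes) -> bytes:
    """Toy XOR stream cipher; applying it twice with the same key restores the data."""
    ks = b""
    counter = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, ks))

# Hypothetical per-hop layer keys.
ALL_LAYERS = {"device": b"k-device", "link": b"k-link", "operator": b"k-operator"}

def send(data: bytes, applied: list[str]) -> bytes:
    for name in applied:                    # only the supported layers are applied
        data = xcrypt(data, ALL_LAYERS[name])
    return data

def receive(data: bytes, applied: list[str]) -> bytes:
    for name in reversed(applied):          # unwind the same layers in reverse order
        data = xcrypt(data, ALL_LAYERS[name])
    return data

# Suppose the "link" hop lacks encryption support; the remaining layers still work.
msg = b"identification record"
assert receive(send(msg, ["device", "operator"]), ["device", "operator"]) == msg
```

In a deployment where a layer is dropped, the text suggests compensating elsewhere, e.g. by choosing a stronger algorithm or longer keys for the layers that remain.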

Although each of the aforementioned preferred embodiments of the present invention is disclosed individually, any combination of one or more of said embodiments is possible within the scope of the present invention.

Disclosure of Certain Embodiments of the Present Invention Further Comprising a Management System

Certain embodiments of the present invention may further comprise a management system as follows: (1) a user preferably initially registers himself or herself into said management system by creating a user profile at the electronic portal of said management system, logging into the private area of said electronic portal using his or her user credentials, and identifying himself or herself according to the disclosed identification method; (2) said user preferably then identifies himself or herself according to the disclosed identification method at one or more physical locations equipped with a point-of-presence identification system as described above, such as but not limited to governmental offices, immigration checkpoints, banks, and/or retail stores, and presents valid physical proof of his or her true identity, e.g. a passport, an identification card and/or a driver's license, to one or more representatives of the organization(s) monitoring the identification procedure; (3) said management system registers that said user has proven his or her true identity, after receiving confirmation from the organization(s) at which identification and proof of identity have taken place; (4) said user may now identify himself or herself at such organizations that have confirmed the identity of said user, e.g. at governmental offices, immigration checkpoints, banks, and/or retail shopping chains; (5) organizations may further agree to share identification information, e.g. if the identity of said user has been confirmed at a governmental office, a bank or retail shopping chain may also accept as valid proof of identity said user identifying himself or herself according to the disclosed identification method, without said user first presenting proof of his or her true identity to any representative(s) of said organization(s); (6) when said user has first logged into the private area of a third-party electronic portal supporting the disclosed identification method by a secure means provided by said third party, such as but not limited to secret digital keys known only by said user and said third party, said user may then identify himself or herself according to the disclosed identification method, after which said user is in the future allowed to log into the private area of said third-party electronic portal according to the disclosed identification method; (7) said user may log into said private area of said electronic portal of said management system using the disclosed identification method and/or his or her user credentials, after which said user may monitor and modify his or her user profile, comprising but not limited to information on the organizations that will accept as valid proof of identity said user identifying himself or herself according to the disclosed identification method, as well as a log of prior identification events.
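The organization-level sharing of identity confirmations in steps (3) through (5) can be sketched as set membership on the user profile; the organization names and the trust-group structure below are hypothetical, introduced only to illustrate the acceptance rule.

```python
# Hypothetical trust groups: organizations that agree to share identity confirmations.
TRUST_GROUPS = [
    {"government_office", "bank", "retail_chain"},
]

class UserProfile:
    def __init__(self, user_id: str):
        self.user_id = user_id
        self.confirmed_by: set[str] = set()   # organizations that saw physical proof
        self.event_log: list[str] = []        # log of prior identification events

    def confirm_identity(self, organization: str) -> None:
        """Step (3): record that this organization confirmed the user's true identity."""
        self.confirmed_by.add(organization)
        self.event_log.append(f"identity confirmed at {organization}")

    def accepted_at(self, organization: str) -> bool:
        """Steps (4)-(5): direct confirmation, or one shared within a trust group."""
        if organization in self.confirmed_by:
            return True
        return any(
            organization in group and group & self.confirmed_by
            for group in TRUST_GROUPS
        )

profile = UserProfile("user-1")
profile.confirm_identity("government_office")
assert profile.accepted_at("bank")          # confirmation shared via the trust group
assert not profile.accepted_at("airline")   # no confirmation and no shared group
```

The profile view in step (7) would then simply expose `confirmed_by`, the derived set of accepting organizations, and `event_log` to the logged-in user.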

INDUSTRIAL APPLICABILITY

Various applications exist for embodiments of the identification system, such as but not limited to the areas of security, military, administration, healthcare and commerce. Furthermore, the aforementioned stand-alone system isolated from external data and/or cellular networks also enables certain distinct applications.

Security applications include protection of people and infrastructure by controlling and restricting access to premises, regions, events and information. Access to high-security facilities, such as but not limited to nuclear power plants and military, governmental and industrial complexes, can be granted to authorized personnel only, and an authorized person opening doors, operating elevators, etc. by means of the disclosed identification method can be monitored and registered as he or she travels within the premises. Similar mechanisms can be used to monitor, register and control movements of employees at any workplace, and further be used to keep account of employees abiding by agreed working hours. Persons traveling between and within nations can be identified, registered and permitted or denied permission to proceed through border checkpoints. In volatile and/or hostile regions, persons can be identified at a distance from a supervised checkpoint, thus protecting the supervising personnel from persons carrying firearms, explosive devices and/or other potentially harmful items and/or from hostile actions by persons subject to identification. The present invention can also be used to monitor and control crowds in public events and premises, such as but not limited to sports events, festivals, concerts, theaters and night clubs, in which access to the event can be denied to persons known for misconduct at prior events and/or persons known or suspected to pose a threat to the general public. Similarly, undesired or potentially dangerous passengers can be denied access to public transport vehicles, such as but not limited to buses, trains, trams, maritime vessels and aircraft. In particular, the present invention can accelerate check-in and border control procedures at airports, which can be carried out as follows: (1) the person enters the departure lounge and approaches a check-in desk; (2) the person identifies himself or herself using the disclosed identification method at the desk; (3) the airline reservation system confirms that the person has checked in, the appropriate baggage tickets are printed and attached, and airline seating is optionally selected; (4) the person passes through the passport control checkpoint while again identifying himself or herself using the disclosed identification method; and (5) the person enters the security control checkpoint and proceeds as normal. Furthermore, the safety of security guards can be enhanced by continuously identifying the security guard using an image capturing apparatus, and in case of emergency the security guard may trigger a noticeable or silent alarm according to the disclosed alarm mechanism.
The safety of security guards who circulate between a number of locations can further be increased when the guard's image capturing apparatus communicates with a monitoring party via a cellular handset and/or wireless data network, and the guard's image capturing apparatus and/or cellular handset is equipped with geographical positioning capabilities, in which case the exact movements of the guard can be tracked and appropriate actions taken in the case that the guard at any time ceases to identify himself or herself using the disclosed identification method and/or triggers an alarm using the disclosed alarm mechanism. Emergency personnel, such as but not limited to police, fire brigade and medical first-response personnel, can rapidly enter premises that are equipped with doors and/or windows that can be unlocked using the disclosed identification procedure. Finally, data security of portable and/or stationary electronic apparatus can be enhanced by requiring the user of these to identify himself or herself using the disclosed identification method prior to accessing information stored within the electronic apparatus and/or using communication features of the electronic apparatus.
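The guard-tracking scheme described above amounts to a simple monitoring loop: the monitoring party escalates whenever the guard ceases to identify himself or herself for too long, or explicitly triggers an alarm. The class name, method names and timeout value below are hypothetical assumptions used only to sketch that logic.

```python
# Hypothetical sketch of the monitoring-party logic described above;
# GuardMonitor and its methods are illustrative, not part of the disclosure.
import time

class GuardMonitor:
    """Tracks a patrolling guard via continuous eye identification
    reports and an explicit audible/silent alarm trigger."""

    def __init__(self, timeout_s: float = 30.0):
        self.timeout_s = timeout_s          # assumed grace period
        self.last_identified = time.monotonic()
        self.alarm_raised = False

    def on_identification(self) -> None:
        # Called each time the image capturing apparatus confirms the
        # guard's identity from his or her eye patterns.
        self.last_identified = time.monotonic()

    def on_alarm(self) -> None:
        # Guard explicitly triggered the disclosed alarm mechanism.
        self.alarm_raised = True

    def status(self) -> str:
        # The monitoring party takes appropriate action on "ALARM"
        # (explicit trigger) or "LOST" (identification has ceased).
        if self.alarm_raised:
            return "ALARM"
        if time.monotonic() - self.last_identified > self.timeout_s:
            return "LOST"
        return "OK"
```

In practice each identification report would arrive alongside a geographical position fix, so that a "LOST" or "ALARM" status can be resolved at the guard's last known location.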

Military personnel carrying an image capturing apparatus may be recognized in various environments, including but not limited to hostile territory and/or battlefields, by means of the disclosed identification method. In this case either (a) in appropriate situations a member of the personnel is requested by the authenticating party to identify himself or herself, after which the member does so using his or her image capturing apparatus according to the disclosed identification method, and/or (b) an image capturing apparatus is fixed in front of either or both of the member's eyes, in which case the member is continuously identified by the authenticating party as long as the member remains within range of a wireless communication network of the authenticating party. Particularly in the latter case, it may be necessary and/or beneficial to augment the image capturing apparatus with one or more camera components that continuously capture a view of the surroundings in the vicinity of the member, and one or more display components that continuously project the captured images and/or possible auxiliary information to the eye(s) of the member. The present invention may be used for various tactical and/or strategic purposes, such as but not limited to: (1) tracking friendly military personnel, especially when the image capturing apparatus is further equipped with geographical positioning capabilities; (2) providing automated weapons systems, such as but not limited to firearms with automated target detection and aiming capabilities, with information on whether detected targets are friendly personnel or unknown; (3) controlling access to strategic weapons systems, such as but not limited to missile launching and guidance systems; and (4) deactivating mines or the like when friendly military personnel are in their vicinity.

Administration-related applications include but are not limited to identification of persons when (1) resolving welfare, social security, taxation and similar issues at public sector offices; and (2) granting residence permits, employment permits, visas, passports, driver's licenses, etc.; in both cases, confirming the true identity of the visitor is crucial in order to avoid issuing unsubstantiated claims, benefits and/or documents. The identification system can further be used in spot checks at workplaces to identify illegal immigrants and/or employees without valid work permits. The identification system may further be used by police officials to verify the identity of suspects, e.g. after a driver has been stopped by a police patrol. Persons may also be identified prior to and/or after participating in any type of official examination or competition, including but not limited to school examinations, academic examinations, professional skill examinations, driving examinations, aviation examinations, etc., and various sports events.

Healthcare applications include but are not limited to identification of patients at hospitals and by first-response personnel, including patients unable to communicate with medical staff, in order to access information in patient records; identification and tracking of newborn infants; and identification of persons purchasing medication in order to access prescriptions from an electronic database.

Commerce applications include but are not limited to: (1) purchasing goods and/or services using the disclosed identification method at (a) supervised physical locations, such as retail stores, restaurants, manned gas stations, etc., and/or (b) unsupervised sales points, such as automated vending machines, unmanned gas stations, etc.; (2) purchasing entry to various events, such as concerts, theaters, cinemas, sports events, festivals, etc., using the disclosed identification method; (3) purchasing travel fares for public transport vehicles, such as buses, trains, trams, maritime vessels and aircraft, using the disclosed identification method, as well as ticket inspectors using the disclosed identification method to confirm payment of a valid travel fare by passengers in such vehicles; (4) purchasing access to toll roads, toll tunnels, toll bridges, etc. using the disclosed identification method while driving a vehicle; (5) paying for transportation in a taxi, limousine, etc. using the disclosed identification method; (6) withdrawing cash funds from an Automated Teller Machine (ATM) using the disclosed identification method; (7) logging into an electronic portal via one or more data networks and/or cellular networks using the disclosed identification method; (8) paying for goods and/or services using the disclosed identification method when logged into one or more electronic portals; (9) transferring funds using the disclosed identification method when logged into one or more electronic portals; (10) submitting digital content to one or more electronic portals after being identified by the disclosed identification method; and (11) participants of customer loyalty programs of retail store chains, hotels, airlines, etc. and/or VIP customers of night clubs, restaurants, etc. (hereafter "special customers") being granted specific privileges after being identified by the disclosed identification method, for example: special customers being charged reduced rates at a retail store, airline booking service, hotel, etc.; special customers being granted free entrance to a night club, sports event, concert, etc.; and/or special customers being permitted to bypass a queue at an airline check-in desk, at the entrance of a sports event, restaurant, night club, etc.

Applications for a stand-alone non-networked system include but are not limited to: (1) unlocking doors and/or windows of a non-networked home, and/or unlocking and opening an automatic garage door, using the disclosed identification method; and (2) unlocking doors and allowing ignition of a vehicle, such as an automobile, motorcycle, motorboat, etc., using the disclosed identification method.

Those skilled in the art will understand that the aforementioned applications are disclosed for exemplary purposes only and do not as such in any manner limit the scope of possible applications for the present invention, which is significantly broader than the scope of the aforementioned examples.

Many other modifications and variations of the present invention are possible in light of the above teachings. Thus, it is to be understood that, within the scope of the appended claims, the invention may be practiced otherwise than as described hereinabove.
