
Title:
DYNAMIC IMAGE RECOGNITION SYSTEM FOR SECURITY AND TELEMEDICINE
Document Type and Number:
WIPO Patent Application WO/2019/014521
Kind Code:
A1
Abstract:
A dynamic imaging system is disclosed herein. The dynamic imaging system includes an imaging device configured to capture images of a body portion of a person so that a displacement of the body portion of the person is capable of being tracked; and a data processing device coupled to the imaging device, and being programmed to determine the displacement of the body portion of the person using the captured images, and to compare the displacement of the body portion of the person to a reference displacement of the body portion of the person acquired prior to the displacement so that dynamic changes in the body portion of the person are capable of being assessed for identifying the person or evaluating physical and physiological changes in the body portion. The dynamic imaging system may be a standalone system or provided as a part of a telemedicine system.

Inventors:
PEYMAN GHOLAM (US)
Application Number:
PCT/US2018/041958
Publication Date:
January 17, 2019
Filing Date:
July 13, 2018
Assignee:
PEYMAN GHOLAM A (US)
International Classes:
A61B5/00; G06V10/147; G10L15/00; G10L15/01; G10L15/02
Foreign References:
US20080294013A12008-11-27
US20100265498A12010-10-21
US4757541A1988-07-12
US20030072479A12003-04-17
US20120114195A12012-05-10
US20130222684A12013-08-29
US20140254939A12014-09-11
US20080205703A12008-08-28
US20160101358A12016-04-14
Attorney, Agent or Firm:
O'REILLY, Patrick, Francis, III (US)
Claims:
CLAIMS

1. A dynamic imaging system, comprising:

an imaging device configured to capture images of a body portion of a person over a predetermined duration of time so that a displacement of said body portion of said person is capable of being tracked during said predetermined duration of time; and

a data processing device operatively coupled to said imaging device, said data processing device being specially programmed to determine said displacement of said body portion of said person over said predetermined duration of time using said captured images, and to compare said displacement of said body portion of said person over said predetermined duration of time to a reference displacement of said body portion of said person acquired prior to said displacement so that dynamic changes in said body portion of said person are capable of being assessed for the purpose of identifying said person or evaluating physical and physiological changes in said body portion.

2. The dynamic imaging system according to claim 1, wherein said imaging device is in the form of a light field camera, said light field camera including a sensor array, a microlens array disposed in front of said sensor array, and an objective lens disposed in front of said microlens array.

3. The dynamic imaging system according to claim 2, wherein said objective lens of said light field camera is in the form of a tunable lens.

4. The dynamic imaging system according to claim 1, wherein said data processing device utilizes at least two points for the comparison of said displacement of said body portion of said person over said predetermined duration of time to said reference displacement of said body portion of said person in a two-dimensional or three-dimensional manner.

5. The dynamic imaging system according to claim 1, wherein said dynamic imaging system is in the form of an independent, standalone system configured to verify the identity of said person for an application selected from the group consisting of: (i) security system identification, (ii) identification by a customs department or state department, (iii) identification by a police department or military branch, (iv) identification at an airport, (v) identification at a banking institution, (vi) identification at a stadium hosting a sporting event, concert, or political event, (vii) identification for use in a smartphone application or a drone application, and (viii) identification of a body's lesion in a two-dimensional or three-dimensional manner.

6. The dynamic imaging system according to claim 1, wherein said dynamic imaging system is in the form of a dynamic facial recognition system, and wherein said body portion of said person for which said displacement is determined comprises a portion of the face of said person imaged in a two-dimensional or three-dimensional manner, and wherein the dynamic imaging system is specially programmed to analyze induced changes in the portion of the face of the person over the predetermined duration of time.

7. The dynamic imaging system according to claim 1, wherein said dynamic imaging system is provided as part of a telemedicine system, said dynamic imaging system configured to verify the identity of a patient prior to any medical history taken or any recommendation and/or advice given.

8. The dynamic imaging system according to claim 1, further comprising a voice recognition sensor configured to capture speech waveforms generated by said person so that said speech waveforms are capable of being superimposed on a displacement curve of said body portion of said person generated from said captured images acquired using said imaging device, thereby enabling both audial and visual attributes of said person to be taken into account for identification purposes.

9. The dynamic imaging system according to claim 8, wherein said displacement curve of said body portion of said person comprises a displacement curve for the lips of said person while said person is reciting a series of vowels.

10. The dynamic imaging system according to claim 1, wherein said body portion of said person being tracked by said imaging device comprises a tumor or a lesion, and wherein said dynamic imaging system is configured to dynamically analyze the growth of and/or changes in said tumor or lesion over a period of time by tracking said displacement of said tumor or lesion and/or said changes in said tumor or lesion in a two-dimensional or three-dimensional manner over said period of time.

11. The dynamic imaging system according to claim 10, wherein said data processing device is specially programmed to track volumetric or surface changes in said tumor or lesion over said period of time so that said tracked volumetric or surface changes in said tumor or lesion are capable of being compared with existing patient data or baseline data for diagnosis or differential diagnosis so as to analyze trends in disease progression or disease improvement.

12. The dynamic imaging system according to claim 1, wherein said body portion of said person being tracked by said imaging device is being affected by a disease process, and wherein said dynamic imaging system is configured to track changes in said disease process in a two-dimensional or three-dimensional manner over a period of time.

13. The dynamic imaging system according to claim 1, wherein said imaging device is in the form of one or more multispectral or hyperspectral cameras configured to capture multispectral or hyperspectral images of said body portion of said person and changes in the body portion over time so that surface features and subsurface features of said body portion of said person are capable of being analyzed.

14. The dynamic imaging system according to claim 13, wherein said one or more multispectral or hyperspectral cameras are configured to capture multispectral or hyperspectral images of said body portion in the infrared spectrum so that a temperature of said body portion is capable of being determined in a two-dimensional or three-dimensional format.

15. The dynamic imaging system according to claim 13, wherein said body portion of said person is in the form of a finger or a hand of said person, wherein said one or more multispectral or hyperspectral cameras are configured to capture multispectral images of said finger or said hand of said person in an initial uncompressed state and a subsequent compressed state, and wherein said data processing device is specially programmed to verify an identity of said person using said multispectral images of said finger or said hand of said person in both said compressed and uncompressed states by taking into account ridges and/or folds on a surface of said finger or said hand of said person and subsurface blood flow through both compressed and uncompressed capillaries in said finger or said hand of said person in a two-dimensional or three-dimensional manner.

16. The dynamic imaging system according to claim 15, further comprising, in addition to said one or more multispectral or hyperspectral cameras, two or more additional cameras surrounding said finger or said hand of said person so as to allow a 360 degree image of said finger or said hand to be constructed by said dynamic imaging system.

17. The dynamic imaging system according to claim 1, wherein said data processing device is specially programmed to project a grid of points over an area of said body portion of said person so as to determine said displacement of said body portion of said person over said predetermined duration of time in a two-dimensional or three-dimensional manner.

18. The dynamic imaging system according to claim 1, wherein said data processing device is specially programmed to execute a subtraction algorithm for comparing a displacement of a subtracted image of said body portion of said person over said predetermined duration of time to a reference subtracted image of said body portion of said person acquired prior to said displacement in a two-dimensional or three-dimensional manner.

19. The dynamic imaging system according to claim 1, wherein said dynamic imaging system is in the form of a dynamic facial recognition system, and said body portion of said person for which said displacement is determined comprises a portion of the face of said person;

wherein said dynamic imaging system further comprises a voice recognition sensor configured to capture speech waveforms generated by said person so that said speech waveforms are capable of being superimposed on a displacement curve of said portion of the face of said person generated from said captured images acquired using said dynamic facial recognition system;

wherein said dynamic imaging system further comprises a fingerprint or hand sensor configured to capture multispectral images of a finger or hand of said person, said data processing device being specially programmed to determine surface features and subsurface features of said finger or hand of said person using said multispectral images; and

wherein an identity of said person is verified using identity comparison results generated from said dynamic facial recognition system, said voice recognition sensor, and said fingerprint or hand sensor so as to verify said identity of said person with certainty.

20. The dynamic imaging system according to claim 1, wherein said body portion of said person being tracked by said imaging device comprises a retina, a limb, or other part of the body involving a physiological function, and wherein said dynamic imaging system is configured to dynamically analyze changes in the retina, the limb, or the other part of the body over a period of time by tracking said changes in the retina, the limb, or the other part of the body in a two-dimensional or three-dimensional manner over said period of time.

21. The dynamic imaging system according to claim 1, wherein said data processing device of said dynamic imaging system is operatively coupled to an artificial intelligence system and/or augmented intelligence system by means of an internet-based network connection so as to allow a user to access information contained on the artificial intelligence system and/or the augmented intelligence system.

22. The dynamic imaging system according to claim 1, wherein said data processing device of said dynamic imaging system is operatively coupled to a virtual reality system, an augmented reality system, or other video-based system so that a user is able to view said captured images via a live feed in real-time.

23. The dynamic imaging system according to claim 22, wherein said virtual reality system, said augmented reality system, or said other video-based system is configured so as to enable said user to zoom in and out on said captured images of said body portion of said person.

Description:
TITLE OF THE INVENTION

DYNAMIC IMAGE RECOGNITION SYSTEM FOR SECURITY AND TELEMEDICINE

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This patent application claims priority to U.S. Provisional Patent Application No. 62/532,098, entitled "Dynamic Imaging System and a Remote Laser Treatment System Using the Same", filed on July 13, 2017; U.S. Provisional Application No. 62/549,941, entitled "Dynamic Imaging System and a Remote Laser Treatment System Using the Same", filed on August 24, 2017; U.S. Provisional Application No. 62/563,582, entitled "Dynamic Imaging System and a Remote Laser Treatment System Using the Same", filed on September 26, 2017; and U.S. Provisional Patent Application No. 62/671,525, entitled "Dynamic Image Recognition System For Security And Telemedicine", filed on May 15, 2018; the disclosure of each of which is hereby incorporated by reference as if set forth in their entirety herein.

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

[0002] Not Applicable.

NAMES OF THE PARTIES TO A JOINT RESEARCH AGREEMENT

[0003] Not Applicable.

INCORPORATION BY REFERENCE OF MATERIAL SUBMITTED ON A COMPACT DISK

[0004] Not Applicable.

BACKGROUND OF THE INVENTION

1. Field of the Invention

[0005] The invention generally relates to a dynamic image recognition system. More particularly, the invention relates to a dynamic image recognition system that may be utilized in the remote recognition of a person and in telemedicine, and which is capable of determining induced dynamic changes in a body portion of a person for identifying the person (which may include any dynamic change occurring after the initial static photo), or evaluating the physical and physiological changes in the body portion to obtain additional data and recognize the changes by subtracting them so as to achieve a third form of information, or third image.

2. Background

[0006] Conventional person identification systems are known that are used to verify the identity of a person. However, these conventional person identification systems are not able to take into account the dynamic characteristics of a body portion of a person for recognition. As such, conventional person identification systems are limited in their ability to accurately verify the identity of a person.

[0007] Similarly, in the medical industry, conventional patient identification systems are known that are used to verify the identity of a patient. However, these conventional patient identification systems are not able to take into account the dynamic characteristics of a body portion of a patient for recognition. As such, like conventional person identification systems in general, conventional patient identification systems are limited in their ability to accurately verify the identity of a patient. Also, conventional patient identification systems are unable to be utilized for other important applications, such as the analysis of a disease process.

[0008] In some known applications, telemedicine is used to triage an accident victim to an appropriate specialist via the Internet with mobile systems, in situations where the spread of a contagious disease is to be avoided, or via video conferencing. Other existing systems include telecardiology, teledermatology, telepathology, teleophthalmology, teleradiology, etc., transmitting radiographic images (X-ray, CT, MRI, PET, SPECT/CT) and health information technology. Presently available social communication systems, such as MSN, Yahoo, and Skype, are not HIPAA approved.

[0009] Therefore, it is apparent that a need exists for a dynamic imaging system that more accurately verifies the identity of a patient or other person, enhances certain physical characteristics, and is capable of being used for other important applications, such as tracking and analyzing trends in a disease process or any other changes. Moreover, it is apparent that a need exists for a dynamic image recognition system that utilizes a dynamic imaging system for more accurately verifying the identity of a patient or other person in real time, in order to ensure that the interview or advice is being directed to the proper patient/person and that the patient's privacy cannot be violated by a hacker, etc. Furthermore, it is apparent that a need exists for a dynamic imaging system that verifies the identity of a patient or other person far more accurately than a typical identification card (e.g., a driver's license, a photo, or a computer image, etc.). Also, there is a need to maintain the privacy and/or confidentiality of the information of a patient or other person with a secure system that cannot be hacked.

BRIEF SUMMARY OF EMBODIMENTS OF THE INVENTION

[0010] Accordingly, the present invention is directed to a dynamic image recognition system and a telesystem using the same that substantially obviates one or more problems resulting from the limitations and deficiencies of the related art.

[0011] In accordance with one or more embodiments of the present invention, there is provided a dynamic imaging system that includes an imaging device configured to capture images of a body portion of a person over a predetermined duration of time so that a displacement of the body portion of the person is capable of being tracked during the predetermined duration of time; and a data processing device operatively coupled to the imaging device, the data processing device being specially programmed to determine the displacement of the body portion of the person over the predetermined duration of time using the captured images, and to compare the displacement of the body portion of the person over the predetermined duration of time to a reference displacement of the body portion of the person acquired prior to the displacement so that dynamic changes in the body portion of the person are capable of being assessed for the purpose of identifying the person or evaluating physical and physiological changes in the body portion.
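By way of illustration only, the capture-track-compare loop described in the preceding paragraph can be sketched in a few lines of Python. Nothing below is taken from the disclosure: the array shapes, the normalization step, and the `tolerance` threshold are all assumptions made for this sketch, which simply compares a live displacement curve for a set of tracked points against an enrolled reference curve.

```python
# Minimal sketch, not the patent's implementation. Assumes the positions of
# tracked points are already available as an array of shape
# (n_frames, n_points, 2), e.g. from a point tracker.
import numpy as np

def displacement_curve(points):
    """Per-frame displacement magnitude of each point relative to frame 0."""
    return np.linalg.norm(points - points[0], axis=2)  # (n_frames, n_points)

def matches_reference(live, reference, tolerance=0.15):
    """Compare a live displacement curve to the enrolled reference curve.
    The normalization and 'tolerance' cutoff are hypothetical choices."""
    live_n = live / (np.max(live) + 1e-9)        # cancel scale/camera distance
    ref_n = reference / (np.max(reference) + 1e-9)
    rmse = np.sqrt(np.mean((live_n - ref_n) ** 2))
    return rmse < tolerance, rmse
```

A real system would presumably also align the two recordings in time before comparing them; the sketch assumes equal-length, time-aligned curves.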

[0012] In a further embodiment of the present invention, the imaging device is in the form of a light field camera, the light field camera including a sensor array, a microlens array disposed in front of the sensor array, and an objective lens disposed in front of the microlens array.

[0013] In yet a further embodiment, the objective lens of the light field camera is in the form of a tunable lens.

[0014] In still a further embodiment, the objective lens of the light field camera is in the form of a fluidic lens, the fluidic lens having an outer housing and a flexible membrane supported within the outer housing, the flexible membrane at least partially defining a chamber that receives a fluid therein.

[0015] In yet a further embodiment, the dynamic imaging system further comprises a fluid control system operatively coupled to the fluidic lens, the fluid control system configured to insert an amount of the fluid into the chamber of the fluidic lens, or remove an amount of the fluid from the chamber of the fluidic lens, in order to change the shape of the fluidic lens in accordance with the amount of fluid therein.

[0016] In still a further embodiment, the fluid control system comprises a pump and one or more fluid distribution lines, at least one of the one or more fluid distribution lines fluidly coupling the pump to the fluidic lens so that the pump is capable of adjusting concavity and/or convexity of the fluidic lens.

[0017] In yet a further embodiment, the fluidic lens further comprises a magnetically actuated subsystem or servomotor configured to selectively deform the flexible membrane so as to increase or decrease the convexity of the flexible membrane of the fluidic lens.

[0018] In still a further embodiment, the microlens array and the sensor array are disposed in a concave configuration in a rear of the light field camera relative to the incoming light rays passing through the objective lens so as to enable the data processing device to mathematically create a two-dimensional or three-dimensional image of the incoming light rays.

[0019] In yet a further embodiment, the data processing device utilizes at least two points for the comparison of the displacement of the body portion of the person over the predetermined duration of time to the reference displacement of the body portion of the person in a two-dimensional or three-dimensional manner.

[0020] In still a further embodiment, the dynamic imaging system is in the form of an independent, standalone system configured to verify the identity of the person for an application selected from the group consisting of: (i) security system identification, (ii) identification by a customs department or state department, (iii) identification by a police department or military branch, (iv) identification at an airport, (v) identification at a banking institution, (vi) identification at a stadium hosting a sporting event, concert, or political event, (vii) identification for use in a smartphone application or a drone application, and (viii) identification of a body's lesion in a two-dimensional or three-dimensional manner.

[0021] In yet another embodiment, the person being imaged or photographed is an active participant in the process, which implies that the person gives his or her consent by participating in the process; his or her photograph is not taken randomly, without permission, by one or more cameras located at a given location.

[0022] In still a further embodiment, the dynamic imaging system is in the form of a dynamic facial recognition system, and wherein the body portion of the person for which the displacement is determined comprises a portion of the face of the person imaged in a two-dimensional or three-dimensional manner, and wherein the dynamic imaging system is specially programmed to analyze induced changes in the portion of the face of the person over the predetermined duration of time (e.g., induced changes in the portion of the face of the person that enhance wrinkles of the face by the person being instructed to frown or smile, thus displacing the wrinkles for the predetermined duration of time).

[0023] In yet a further embodiment, the dynamic imaging system is provided as part of a telemedicine system, the dynamic imaging system configured to verify the identity of a patient prior to any medical history taken or any recommendation and/or advice given. The dynamic imaging system may also be provided with an additional scanning device for imaging a specific part of the body (e.g., the configuration of the retinal vessels as a static image).

[0024] In still a further embodiment, the dynamic imaging system further comprises a voice recognition sensor configured to capture speech sound waves generated by the person so that the speech sound waves are capable of being superimposed on a displacement curve of the body portion of the person (e.g., face or mouth of the person) generated from the captured images acquired using the imaging device, thereby enabling both audial and visual attributes of the person to be taken into account for identification purposes.

[0025] In yet a further embodiment, the dynamic imaging system further comprises a voice recognition sensor configured to capture speech or sound waves generated by the person so that the speech sound waves are capable of being superimposed on a displacement curve of the body portion of the person generated from the captured images acquired using the imaging device, thereby enabling both audial and visual attributes of the person to be taken into account for identification purposes; the process may also be repeated and recorded for absolute security using the above combinations with other gestures and sounds created by the same person.

[0026] In still a further embodiment, the dynamic imaging system further comprises a voice recognition sensor configured to capture speech sound waves generated by the person so that the speech sound waves are capable of being superimposed on a displacement curve of the body portion of the person generated from the captured images acquired using the imaging device, and to compare the acquired data with similar data obtained from the same person previously, and analyze changes that might have been induced by intoxication (e.g., resulting from alcohol or another type of substance abuse, such as heroin, etc.) or by changes in the mood of the person.

[0027] In yet a further embodiment, the dynamic imaging system further comprises a voice recognition sensor configured to capture speech sound waves generated by the person so that the speech sound waves are capable of being superimposed on a displacement curve of the body/mouth portion of the person generated from the captured images acquired using the imaging device, and compared with similar data obtained or modified as a result of induced emotional changes.

[0028] In still a further embodiment, the dynamic imaging system further comprises a voice recognition sensor configured to capture speech sound waves generated by the person so that the speech sound waves are capable of being superimposed on a displacement curve of the body portion of the person (e.g., the mouth, etc.) generated from the captured images acquired using the imaging device (e.g., one or more hyperspectral or multispectral cameras), and compared with similar data obtained or modified as a result of induced emotional changes and analyzed with a subtraction algorithm to observe those changes or enhance the results to predict their progress.

[0029] In yet a further embodiment, the displacement curve of the body portion of the person comprises a displacement curve for the lips of the person while the person is reciting a series of vowels.
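As a rough sketch of how the superimposition just described might be scored (again, an editorial illustration rather than the disclosed method), the loudness envelope of the recorded speech can be resampled to the video frame rate and correlated with the lip-displacement curve. The helper names and the use of a simple Pearson correlation are assumptions made for this sketch.

```python
import numpy as np

def audio_envelope(audio, n_frames):
    """Collapse the |waveform| to one loudness sample per video frame."""
    per_frame = len(audio) // n_frames
    trimmed = np.abs(audio[: per_frame * n_frames])
    return trimmed.reshape(n_frames, per_frame).mean(axis=1)

def audiovisual_score(lip_displacement, audio):
    """Correlation between lip motion and speech loudness while the person
    recites 'A E I O U'; higher scores support a live, matching speaker."""
    env = audio_envelope(np.asarray(audio, dtype=float), len(lip_displacement))
    return np.corrcoef(lip_displacement, env)[0, 1]
```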

[0030] In still a further embodiment, the body portion of the person being tracked by the imaging device comprises a tumor, lesion, or retina, and wherein the dynamic imaging system is configured to dynamically analyze the growth of the tumor or lesion over a period of time by tracking the displacement of the tumor or lesion in a two-dimensional or three-dimensional manner over the period of time.

[0031] In yet a further embodiment, the data processing device is specially programmed to track volumetric or surface changes in the tumor, lesion, or retina over the period of time so that the tracked volumetric or surface changes over a period of time in the tumor, lesion, or retina, or on the mucosa or the skin of the patient, are capable of being compared with existing patient data or baseline data for diagnosis or differential diagnosis so as to analyze and predict trends in disease progression or disease improvement.
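For illustration, trend analysis of the kind described above reduces to fitting tracked measurements over time. The sketch below uses entirely hypothetical surface-area values, and the simple linear fit is an assumption, not anything specified in the disclosure.

```python
import numpy as np

# Hypothetical follow-up record: lesion surface area (mm^2) at each visit.
days = np.array([0.0, 30.0, 62.0, 91.0, 120.0])
area = np.array([41.0, 43.5, 47.2, 52.8, 57.1])

slope, intercept = np.polyfit(days, area, 1)   # growth rate in mm^2/day
pct = 100.0 * (area[-1] - area[0]) / area[0]
print(f"trend: {slope:+.3f} mm^2/day, {pct:+.1f}% vs. baseline")
```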

[0032] In still a further embodiment, the body portion of the person being tracked by the imaging device is being affected by a disease process, and wherein the dynamic imaging system is configured to track changes in the disease process in a two-dimensional or three-dimensional manner over a period of time.

[0033] In yet a further embodiment, the imaging device is in the form of one or more multispectral or hyperspectral cameras configured to capture multispectral or hyperspectral images of the body portion of the person and changes in the body portion over time so that surface features and subsurface features of the body portion of the person are enhanced (e.g., enhancement of wrinkles, etc.) and are capable of being analyzed.

[0034] In still a further embodiment, the one or more multispectral or hyperspectral cameras are configured to capture multispectral or hyperspectral images of the body portion in the infrared spectrum so that a temperature of the body portion is capable of being determined during the dynamic changes in a two-dimensional or three-dimensional format.

[0035] In yet a further embodiment, the extent of the motion at a joint is examined and compared in time with existing data and the differences are analyzed.

[0036] In still a further embodiment, a long-wavelength infrared (LWIR) camera or hyperspectral camera is used to convert an image taken at night into a black-and-white photo using the capabilities of automatic and human matching, thermal imaging, and polarized thermal radiation, where the polarimetric information enhances the geometric and textural details that might be present in pre-existing dynamic identity recognition files or in dynamic changes occurring in real time.

[0037] In yet a further embodiment, the body portion of the person is in the form of a finger or a hand of the person, the one or more multispectral or hyperspectral cameras are configured to capture multispectral images of the finger or the hand of the person in an initial uncompressed state and a subsequent compressed state, and the data processing device is specially programmed to verify an identity of the person using the multispectral images of the finger or the hand of the person in both the compressed and uncompressed states by taking into account ridges and/or folds on a surface of the finger or the hand of the person and subsurface blood flow through both compressed and uncompressed capillaries in the finger or the hand of the person.
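A minimal sketch of how the compressed/uncompressed comparison might be reduced to a matchable feature vector follows. It is not the disclosed algorithm; the per-band statistics, the cosine test, and the `threshold` value are all assumptions made for this illustration.

```python
import numpy as np

def compression_signature(uncompressed, compressed):
    """Both inputs are multispectral cubes of shape (H, W, n_bands).
    Pressing the finger blanches surface capillaries, so the per-band
    change between the two states is treated here as the feature."""
    diff = compressed.astype(float) - uncompressed.astype(float)
    return np.concatenate([diff.mean(axis=(0, 1)), diff.std(axis=(0, 1))])

def verify(sig_live, sig_enrolled, threshold=0.8):
    """Cosine-similarity check against the enrolled signature (hypothetical)."""
    cos = float(np.dot(sig_live, sig_enrolled) /
                (np.linalg.norm(sig_live) * np.linalg.norm(sig_enrolled) + 1e-9))
    return cos >= threshold
```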

[0038] In still a further embodiment, the body portion of the person is in the form of a finger, hand, or face of the person; one or more three- or four-dimensional multispectral or hyperspectral cameras are configured so as to surround the finger, hand, or face (e.g., two cameras at top right and left on the upper side of the finger, hand, or face, and two cameras at right and left from below the finger, hand, or face) to capture multispectral images of the finger, the hand, or the face of the person in an initial state and a subsequent changed state; and the data processing device is specially programmed to verify an identity of the person using the multispectral images of the finger, the hand, or the face of the person in both the initial and changed states by taking into account ridges and/or folds on a surface of the finger, the hand, or the face of the person and subsurface blood flow through both compressed and uncompressed capillaries in the finger, the hand, or the face of the person, where the images are stitched together to create a 360 degree, three-dimensional image of the person that can be rotated in any direction via software and an algorithm for evaluation and recognition of the person.

[0039] In yet a further embodiment, the data processing device is specially programmed to project a grid of points over an area of the body portion of the person so as to determine the displacement of the body portion of the person over the predetermined duration of time in a two-dimensional, three-dimensional, or four-dimensional manner, which includes time as the fourth dimension of the image, with the dynamic changes capable of being isolated or evaluated as a whole.
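The projected grid of points lends itself naturally to standard point tracking. The sketch below, an illustration rather than the disclosed method, builds such a grid and follows it across grayscale frames with OpenCV's pyramidal Lucas-Kanade optical flow; the grid spacing and function names are assumptions.

```python
import cv2
import numpy as np

def make_grid(x0, y0, x1, y1, step=10):
    """Regular grid over the region of interest, in OpenCV's point format."""
    xs, ys = np.meshgrid(np.arange(x0, x1, step), np.arange(y0, y1, step))
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    return pts.reshape(-1, 1, 2)

def track_grid(frames, grid):
    """Track grid points across grayscale frames; returns an array of
    per-frame positions with shape (n_frames, n_points, 2)."""
    positions = [grid.reshape(-1, 2).copy()]
    prev, pts = frames[0], grid
    for frame in frames[1:]:
        pts, status, _err = cv2.calcOpticalFlowPyrLK(prev, frame, pts, None)
        positions.append(pts.reshape(-1, 2).copy())
        prev = frame
    return np.stack(positions)
```

The output of such a tracker is exactly the (n_frames, n_points, 2) position array assumed by the displacement-curve sketch given earlier in this summary.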

[0040] In still a further embodiment, the data obtained before a dynamic change is subtracted, by a processor, from the data obtained after the change in order to evaluate those changes, or to compare those changes to the initial data obtained from the patient or any other person, thereby confirming the identity of the patient or person.

[0041] In yet a further embodiment, the data processing device is specially programmed to execute a subtraction algorithm for comparing a displacement of a subtracted image of the body portion of the person over the predetermined duration of time to a reference subtracted image of the body portion of the person acquired prior to the displacement in a two-dimensional or three-dimensional manner.
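Read literally, the subtraction algorithm is a pixel-wise difference of registered images. The following sketch, an illustration under stated assumptions rather than the disclosed code, forms the "third image" and compares it against a reference subtracted image using a hypothetical mean-absolute-difference cutoff.

```python
import cv2

def subtracted_image(before, after):
    """Pixel-wise 'third image' left after subtracting the pre-displacement
    frame from the post-displacement frame (grayscale, registered images)."""
    return cv2.absdiff(after, before)

def subtraction_match(live_before, live_after, ref_subtracted, cutoff=12.0):
    """Mean absolute difference between the live and enrolled subtracted
    images; 'cutoff' is a hypothetical threshold on 0-255 grayscale."""
    live_sub = subtracted_image(live_before, live_after)
    return float(cv2.absdiff(live_sub, ref_subtracted).mean()) < cutoff
```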

[0042] In still a further embodiment, the dynamic imaging system is in the form of a dynamic facial recognition system, and the body portion of the person for which the displacement is determined comprises a portion of the face of the person. In this embodiment, the dynamic imaging system further comprises a voice recognition sensor configured to capture speech waveforms generated by the person so that the speech waveforms are capable of being superimposed on a displacement curve of the portion of the face of the person generated from the captured images acquired using the dynamic facial recognition system. Also, in this embodiment, the dynamic imaging system further comprises a fingerprint or hand sensor configured to capture multispectral images of a finger or hand of the person, the data processing device being specially programmed to determine surface features and subsurface features of the finger or hand of the person using the multispectral images. Further, in this embodiment, an identity of the person is verified using identity comparison results generated from the dynamic facial recognition system, the voice recognition sensor, and the fingerprint or hand sensor so as to verify the identity of the person with certainty.

[0043] In yet a further embodiment, the body portion of the person being tracked by the imaging device comprises a retina, a limb, or other part of the body involving a physiological function, and wherein the dynamic imaging system is configured to dynamically analyze changes in the retina, the limb, or the other part of the body over a period of time by tracking the changes in the retina, the limb, or the other part of the body in a two-dimensional or three-dimensional manner over the period of time.

[0044] In still a further embodiment, the data processing device of the dynamic imaging system is operatively coupled to an artificial intelligence system and/or augmented intelligence system by means of an internet-based network connection so as to allow a user to access information contained on the artificial intelligence system and/or the augmented intelligence system.

[0045] In yet a further embodiment, the data processing device of the dynamic imaging system is operatively coupled to a virtual reality system, an augmented reality system, or other video-based system so that a user is able to view the captured images via a live feed in real-time.

[0046] In still a further embodiment, the virtual reality system, the augmented reality system, or the other video-based system is configured so as to enable the user to zoom in and out on the captured images of the body portion of the person.

[0047] It is to be understood that the foregoing general description and the following detailed description of the present invention are merely exemplary and explanatory in nature. As such, the foregoing general description and the following detailed description of the invention should not be construed to limit the scope of the appended claims in any sense.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

[0048] The invention will now be described, by way of example, with reference to the accompanying drawings, in which:

[0049] FIG. 1 is a schematic illustration of an exemplary embodiment of a telemedicine system, in accordance with the invention, wherein the telemedicine system is provided with a dynamic imaging system that includes an image recognition sensor;

[0050] FIG. 2a is an illustration of a human face, depicting a grid that is projected over the facial area being analyzed using the dynamic imaging system described herein, and further depicting the positions of two exemplary points being used to track dynamic changes in the lips of the person;

[0051] FIG. 2b is another illustration of a human face, depicting a grid that is projected over the facial area being analyzed using the dynamic imaging system described herein, and further depicting the positions of two exemplary points being used to track dynamic changes in the lips of the person while the person is speaking the letter "O";

[0052] FIG. 3a is an illustration of a finger being pressed against a transparent glass surface so that the tip of the finger is capable of being imaged, according to one embodiment of the invention;

[0053] FIG. 3b depicts a finger before touching a transparent glass surface used for the imaging of the finger;

[0054] FIG. 3c depicts the finger touching the transparent glass surface, the finger undergoing imaging that takes into account both surface and subsurface properties of the finger in a two-dimensional and/or three-dimensional manner;

[0055] FIG. 4a is an illustration of a finger being pressed against a transparent glass surface so that the tip of the finger is capable of being imaged, according to another embodiment of the invention using multiple cameras for 360 degree image capture;

[0056] FIG. 4b depicts a finger before touching a transparent glass surface used for the three-dimensional imaging of the finger;

[0057] FIG. 4c depicts the finger touching the transparent glass surface, the finger undergoing imaging that takes into account both surface and subsurface properties of the finger in a three-dimensional manner;

[0058] FIG. 5a is an illustration of a human face, depicting folds on the facial area being analyzed using the dynamic imaging system described herein, and further depicting distances between facial features being used to track dynamic changes in facial expressions of the person;

[0059] FIG. 5b is another illustration of the human face of FIG. 5a, wherein the person is smiling, and dynamic changes in the facial features of the person are being analyzed as well as the trends in those features;

[0060] FIG. 5c is another illustration of the human face of FIG. 5a, wherein the person is frowning, and dynamic changes in the facial features of the person are being analyzed as well as the trends in those changes;

[0061] FIG. 5d is an illustration of a first subtracted image of the human face of FIG. 5b, wherein the subtracted image further allows dynamic changes in the mouth of the person and the trends in those changes to be analyzed;

[0062] FIG. 5e is an illustration of a second subtracted image of the human face of FIG. 5b, wherein the subtracted image depicts enhanced folds on the facial area of the person (Detail "A");

[0063] FIG. 6a is an illustration tracking the growth of a tumor over time, wherein dynamic changes in the growth of the tumor are being analyzed;

[0064] FIG. 6b is a subtracted image of the tumor of FIG. 6a;

[0065] FIG. 7a is an illustration of a human face, depicting folds on the facial area being analyzed using the dynamic imaging system described herein while the person is speaking one or more words and the sound frequency of the person's voice is being simultaneously analyzed; and

[0066] FIG. 7b is another illustration of a human face, depicting folds on the facial area being analyzed using the dynamic imaging system described herein while the person is speaking one or more words and the sound frequency of the person's voice is being correlated with the dynamic changes occurring in the person's face.

[0067] Throughout the figures, the same elements are always denoted using the same reference characters so that, as a general rule, they will only be described once.

DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION

[0068] In one or more embodiments, a dynamic image recognition system may include an imaging device configured to capture images of a body portion of a person over a predetermined duration of time so that a displacement of the body portion of the person is capable of being tracked during the predetermined duration of time; and a data processing device operatively coupled to the imaging device, the data processing device being specially programmed to determine the displacement of the body portion of the person over the predetermined duration of time using the captured images, and to compare the displacement of the body portion of the person over the predetermined duration of time to a reference displacement of the body portion of the person acquired prior to the displacement so that dynamic changes in the body portion of the person are capable of being assessed and subtracted for the purpose of identifying the person or evaluating physiological changes in the body portion and the trends of those changes. In these one or more embodiments, at least two points of displacement may be used for the comparison of the subsequent displacement of the body portion of the person to the reference displacements and their trends. In one or more embodiments, the dynamic imaging system may be provided as an independent system (e.g., with components 59a, 62, 73, and 75 of FIG. 1). Alternatively, in one or more other embodiments, the dynamic imaging system may be incorporated in the illustrative telemedicine system described hereinafter.

[0069] Also, in one or more embodiments, the dynamic imaging system may include an imaging device configured to capture images of a body portion of a person in his or her (1) normal state without displacement of a body part (e.g., a face), then (2) during the induced displacement over a predetermined duration of time (e.g., while the person is smiling or frowning or lifting his or her brow or "showing his or her teeth", etc.) so that a displacement of the body portion of the person is capable of being tracked during the predetermined duration of time and 2-dimensional, 3-dimensional, and/or 4-dimensional images are able to be obtained which are time dependent; and a data processing device operatively coupled to the imaging device (including automatically adjusting the angulation, magnification, etc.), the data processing device being specially programmed to determine the displacement of the body portion of the person over the predetermined duration of time using the captured images, and to compare the displacement of the body portion of the person over the predetermined duration of time to a reference displacement of the body portion of the person acquired prior to the displacement so that dynamic changes in the body portion of the person are capable of being assessed for the purpose of identifying the person or evaluating physical and physiological changes and their trends in the body portion.

[0070] In addition, in one or more embodiments, the dynamic imaging system may include an imaging device configured to capture images of a body portion of a person in his or her (1) normal state without displacement of a body part (e.g., a face), then (2) during the induced displacement over a predetermined duration of time (e.g., while the person is smiling or frowning or lifting his or her brow or showing his or her teeth, etc.) so that a displacement of the body portion of the person is capable of being tracked during the predetermined duration of time and 2-dimensional, 3-dimensional, and/or 4-dimensional images are able to be obtained which are time dependent; and a data processing device operatively coupled to the imaging device, the data processing device being specially programmed to determine the displacement of the body portion of the person over the predetermined duration of time while some invisible folds and wrinkles are enhanced or the person makes the teeth or the tongue visible, etc., thus producing more characteristic data of the person for identification. Such data can be obtained if the images prior to displacement and afterward are subtracted to obtain a third image which is still specific to the person, such as the position of the teeth and their specific characteristics, or the enhanced position and changes of the wrinkles, etc., thereby enhancing the weight of the data and the identity recognition.

[0071] The displacement or change can be improvement or worsening of a condition, such as part of the body (e.g., the face), a disease process, or a structure or appearance of an agricultural field caused by internal or external factors, such as weather, season, pests or disease process, etc., each of which is considered a dynamic change. The ability to image or obtain data and follow its dynamic changes and analyze it by a subtraction algorithm by pixelation or data collection mathematically is considered dynamic recognition and creates an almost exact comparison of a structure and its subsequent induced changes (data) as a dynamic identity recognition and can better predict the subsequent changes or direction in a dynamic format, in contrast to the existing static image.

[0072] In a further embodiment, the imaging device may be a hyperspectral camera in which the image can be isolated as produced by different wavelengths, including infrared (or, for example, the imaging device may be in the form of infrared sensors as a whole), or certain wavelengths of infrared may be selected and enhanced to better represent the environment, or features in the body, face, skin, or mucosa showing changes in blood circulation under the skin (or, for example, an infection, a tumor, etc.) at a given time, or showing collapsed capillaries in the wrinkles or folds, thereby enhancing, alone or together with the visible spectrum, the weight of the data obtained for dynamic identity recognition of a person or a disease process, etc. In other embodiments, other suitable cameras may also be used for the imaging device.
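As an illustrative sketch of the band isolation and enhancement just described (the cube layout and function names are assumptions, not from the disclosure), one might select the infrared bands of a hyperspectral cube and contrast-stretch their average:

```python
import numpy as np

def select_bands(cube, wavelengths_nm, lo_nm, hi_nm):
    """cube: (H, W, n_bands); wavelengths_nm: per-band center wavelength.
    Keeps only the bands inside [lo_nm, hi_nm], e.g. the near-infrared."""
    keep = (wavelengths_nm >= lo_nm) & (wavelengths_nm <= hi_nm)
    return cube[:, :, keep]

def enhance(band_stack):
    """Average the kept bands and stretch to [0, 1] so subsurface features
    (e.g., capillary blood flow) stand out for inspection or subtraction."""
    img = band_stack.mean(axis=2)
    img = img - img.min()
    return img / (img.max() + 1e-9)
```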

[0073] In yet a further embodiment, the dynamic imaging system may be in the form of a dynamic facial recognition system, where the body portion of the person for which the displacement is determined comprises a portion of the face of the person imaged in a two-dimensional or three-dimensional manner in time after the person is instructed to perform a certain action in order to enhance changes in the portion of the face (e.g., enhancing wrinkles of the face by asking the person to frown, or enhancing facial features by asking the person to smile, thus showing the teeth and enhancing the appearance of the wrinkles around the mouth, nose, eyes, etc., and displacing the wrinkles for a given time while showing the teeth). The number and configuration of the teeth are by themselves unique to a person, while all other parameters are kept for the data analysis (e.g., a change in the teeth, if they are replaced, will be recognized, or a cosmetic surgery will affect the blood circulation in that area).

[0074] In still a further embodiment, the dynamic imaging system is in the form of a dynamic facial recognition system, where the body portion of the person for which the displacement is determined comprises a portion of the face of the person imaged in a two-dimensional or three-dimensional manner, and the system is specially programmed to analyze different, induced changes performed in a sequential manner. For example, a security agency may require the induced changes to be repeated in a different manner (e.g., if the patient initially smiles, he may then be asked to frown), or may continue the sequence indefinitely in a different and random manner (e.g., asking the person to show his or her teeth, perform another action, perform yet another action, etc.), with the system recording the data for analysis so as to make the sequence almost impossible for another person to repeat.

[0075] In a further embodiment, the imaging device may be equipped with a tracking system to be able to track a relatively stable area of a person's image, such as the forehead, while documenting the dynamic changes of the other areas.

[0076] Now, turning to FIG. 1, an illustrative embodiment of a telemedicine system is depicted generally at 50'. In FIG. 1, the telemedicine system 50' is represented in schematic form. As shown in this figure, the system 50' includes a plurality of local control systems disposed at local sites 51a', 51b', 51c'; each system including a telemedicine imaging apparatus 55. Also, in the illustrative embodiment of the invention, each telemedicine imaging apparatus 55 may include a photoacoustic system.

[0077] Preferably, each telemedicine imaging apparatus 55 of the system 50' is in communication with a local control module 62 and control-processing means 59a (see FIG. 1). In the illustrative embodiment, the control-processing means 59a may be embodied as a local personal computer or local computing device that is specially programmed to carry out all of the functionality that is described herein in conjunction with the local control systems of the telemedicine system 50'.

[0078] In the illustrated embodiment of FIG. 1, each local control system also includes body tracking means 71 (e.g., eye tracking means 71) for measuring position(s) and movement(s) of a body portion of the patient (e.g., an eye of the patient). According to the invention, the body tracking means 71 (e.g., eye tracking means 71) can be an integral component or feature of the telemedicine imaging apparatus 55 or a separate system or device.

[0079] Referring again to FIG. 1, it can be seen that each local control system also includes an image recognition sensor 73 for capturing images of a subject or patient that may be used to identify and/or verify the identity of the subject or patient. Advantageously, the positive identification and verification of the identity of the subject or patient receiving treatment prevents mistakes wherein the wrong subject or patient is treated. In addition, rather than identifying and/or verifying the identity of the subject or patient, the image recognition capabilities of the image recognition sensor 73 may also be used to identify and verify that a particular surgical procedure is being performed on the proper body portion of a subject or patient (e.g., to verify that a laser coagulation procedure is being performed on the proper one of the patient's eyes or that a surgical procedure is being performed on the proper one of the patient's limbs). As will be described hereinafter, the image recognition sensor 73 or imaging device may be provided as part of a dynamic image recognition system that is operatively coupled to the telemedicine imaging apparatus 55 located at the same or a remote location. Also, as will be described hereinafter, the image recognition sensor 73 or imaging device may comprise a light field camera that is able to simultaneously record a two-dimensional image and metrically calibrated three-dimensional depth information of a scene in a single shot.

In one or more embodiments, the image recognition sensor 73 or imaging device of each local control system may be operatively connected to the local computing device, which forms the control-processing means 59a of the local control system. The local computing device may be specially programmed with image/pattern recognition software loaded thereon, and executed thereby for performing all of the functionality necessary to identify and verify a particular subject or patient, or to identify and verify a body portion of the particular subject or patient that is to be recognized. Initially, the local computing device may be specially programmed to capture and store reference dynamic information regarding a body portion of the subject or patient so that the reference dynamic information may be compared to dynamic information pertaining to the same body portion captured at a later time (i.e., just prior to the performance of the surgical procedure). Then, prior to the performance of a surgical procedure (e.g., a laser coagulation performed on the eye, or body surface, etc.), or prior to providing advice to the patient or a person, etc. for any reason (e.g., security at any location), dynamic information pertaining to the same body portion of the subject or patient is captured by the image sensor 73 (i.e., the light field camera) and the local computing device compares the subsequent dynamic information pertaining to the body portion to the reference dynamic information and determines if the subsequent dynamic information of the body portion matches or substantially matches the reference dynamic information. In one or more embodiments, this information may relate to the position or size of any part of the body, extremities, or organ (e.g., as seen on an X-ray film, MRI, CT-scan, or gait changes by walking or particular habit of head position, hand, facial changes during speech, or observation of changes due to emotional changes, or medication, or disease or trauma or neurological incidence, etc.). When the local computing device determines that the subsequent dynamic information pertaining to the body portion of the subject or patient matches or substantially matches the reference dynamic information, the local computing device is specially programmed to generate a matched identity confirmation notification or the trends of changes, etc. in a two-dimensional and/or three-dimensional manner that is sent to the remote computing device at the remote site in order to inform the attending physician or the receiving authority that the proper patient or body portion of the patient has been identified and verified or his or her response to stimuli. The matched identity confirmation notification may also be delivered to the technician or security officers at the local site via the local computing device. Then, after the other safety checks of the system 50' have been performed, the surgical procedure or planned procedure at the hospital is capable of being performed on the patient. 
Conversely, when the local computing device determines that the subsequent dynamic image recognition information regarding the body portion of the subject or patient does not match or substantially match the reference dynamic information, the local computing device is specially programmed to generate a non-matching identity notification that is sent to the remote computing device at the remote site in order to inform the attending physician that the patient or body portion of the patient has not been properly identified and verified. The non-matching identity notification may also be delivered to the technician at the local site via the local computing device. When the non-matching identity notification is sent to the attending physician, the local computing device also disables the surgical equipment at the local site in order to prevent the procedure from being performed on the incorrect patient or the incorrect body portion of the patient (e.g., in a laser coagulation procedure, laser firing will be automatically locked out by the local computing device) or, as another example, the person will be rejected for being admitted to the country (e.g., at customs) or to another place, or the cause of the non-matching is evaluated, etc.

[0080] Turning again to FIG. 1, it can be seen that each local control system may further include a voice recognition sensor 75 for capturing the speech sound waves generated by the subject or patient so that the speech of the subject or patient may additionally be used to identify and/or verify the identity of the subject or patient. The voice recognition subsystem described herein is capable of precisely reproducing the frequency and amplitude of each spoken word. For example, similar to the voice recognition technology used in smartphones, computers, and other voice recording systems, the speech waveforms captured by the present system may be transmitted over a long distance as a sound wave, and their characteristics may be analyzed or amplified over a given time that is adjustable by the system. In one or more embodiments, the voice recognition sensor 75 may be used in conjunction with the image recognition sensor 73 or imaging device described above to further verify that a surgical procedure is being performed on the correct subject, patient, or person. The output of the voice recognition sensor may be simultaneously overlaid on the dynamic image recognition changes for facial or body recognition. Particularly, in one or more embodiments, the sound waves generated from the data acquired by the voice recognition sensor 75 may be superimposed on the displacement curve generated from the data acquired by the imaging device (e.g., the light field camera) so that both audial and visual attributes of the person may be taken into account for identification purposes, thereby making the identification of the person far more accurate. For example, as a person recites a series of vowels (i.e., AEIOU), the displacement of the lips of the person is recorded by the imaging device (e.g., the light field camera), while the voice recognition sensor 75 simultaneously records the voice of the person (e.g., the amplitude and/or frequency of the sound wave generated by the person). In one or more embodiments, the dynamic imaging system is used alone or simultaneously with voice overlay, thereby creating 2D or 3D images using multispectral light, which includes IR and mid-IR, captured by cameras to measure time-dependent changes for the creation of a dynamic event made of voice and images or changes thereof. In one or more embodiments, a subtraction algorithm of the system is used to produce a clear subtracted wave/image complex from a person examined a first and a second time, and to project the subtracted wave complex on the subtracted value of the person's image so as to evaluate a match or change, and to present the changed values or their difference as compared to the sound waves received the first time by the voice recognition sensor 75. In one or more embodiments, the voice recognition sensor 75 may comprise a microphone that captures the speech of the subject or patient over the entire speech frequency range of a human being (e.g., a frequency range from 50 Hz to 5,000 Hz to encompass the typical frequency range for both males and females). As such, the syntax and sound frequencies generated by the subject or patient are capable of being used by the local control system for verification and identification of the subject prior to a surgical procedure being performed on him or her, thereby eliminating the unfortunate consequences of mistaking one patient for another. The syntax and sound frequencies generated by the patient also are capable of being used by the local control system for verification and identification of the patient prior to a procedure being performed in a hospital, or for other uses, such as use by customs officials or other security agencies. In one or more embodiments, the voice recognition sensor 75 may be used as a second means of patient/person identity confirmation in order to confirm the identity of the patient that was previously verified by the image recognition sensor 73 or imaging device. In other words, the image recognition sensor 73 or imaging device may comprise a first stage of patient identity confirmation, and the voice recognition sensor 75 may comprise a second stage of patient identity confirmation.
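
By way of illustration only, the overlay of a voice waveform on a lip-displacement curve might be reduced to the following minimal sketch, which correlates two synthetic, uniformly sampled signals; the function names, the normalization, and the correlation measure are assumptions for the example, not features recited in the disclosure:

```python
import numpy as np

def normalized(x):
    # Zero-mean, unit-variance normalization so signals of different
    # units (millimeters vs. sound amplitude) can be overlaid and compared.
    return (x - x.mean()) / (x.std() + 1e-12)

def audio_visual_similarity(lip_displacement, audio_envelope):
    """Correlate a lip-displacement curve with a simultaneously recorded
    voice amplitude envelope, both resampled to the same rate."""
    a = normalized(lip_displacement)
    b = normalized(audio_envelope)
    return float(np.dot(a, b) / len(a))  # ~1.0 when the curves move together

# Synthetic example: a person speaking "O" produces a lip-opening curve
# and a roughly synchronous loudness envelope.
t = np.linspace(0.0, 1.0, 200)
lip = np.exp(-((t - 0.5) ** 2) / 0.02)      # lip opening vs. time
voice = np.exp(-((t - 0.52) ** 2) / 0.02)   # loudness envelope, slight lag

print(f"audio-visual similarity: {audio_visual_similarity(lip, voice):.3f}")
```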

[0081] Similar to the image recognition sensor 73 or imaging device described above, the voice recognition sensor 75 of each illustrative local control system may be operatively connected to the local computing device, which forms the control-processing means 59a of the local control system. The local computing device may be specially programmed with voice recognition software loaded thereon, and executed thereby, for performing all of the functionality necessary to identify and verify a particular subject or patient that is to be recognized. Initially, the local computing device may be specially programmed to capture and store a first reference speech waveform of the subject or patient so that the first reference speech waveform may be compared to a second speech waveform of the same patient or subject captured at a later time (e.g., just prior to making a decision about a patient or a person). That is, the patient or subject may be asked to say a particular word, a plurality of words, or a series of vowels (i.e., AEIOU) that are captured by the voice recognition sensor 75 so that the recording can be used as the first reference speech waveform. Then, prior to the performance of the surgical procedure (e.g., a laser coagulation procedure performed on the eye or elsewhere on the body) or admission into a secure area, the second speech waveform of the subject or patient is captured by the voice recognition sensor 75 (i.e., the microphone records the same word, plurality of words, or series of vowels repeated by the subject or patient), and the local computing device compares the second speech waveform of the patient or subject to the first reference speech waveform and determines if the second speech waveform of the subject or patient matches or substantially matches the first reference speech waveform (i.e., by comparing the frequency content of the first and second speech sound waves). When the local computing device determines that the second speech waveform of the subject or patient matches or substantially matches the first reference speech waveform, the local computing device is specially programmed to generate a matched speech confirmation notification that is sent to the remote computing device at the remote site in order to inform the attending physician that the proper patient has been identified and verified. The matched speech confirmation notification, or any discrepancies, may also be delivered to the technician at the local site via the local computing device. Then, after the other safety checks of the system 50' have been performed, a surgical procedure is capable of being performed on the patient, or the identity of a person can be confirmed in various security-related circumstances. Conversely, when the local computing device determines that the second speech waveform of the subject or patient does not match or substantially match the first reference speech waveform, the local computing device is specially programmed to generate a non-matching speech notification that is sent to the remote computing device at the remote site in order to inform the appropriate party (e.g., a physician, an authorized person, a security department, etc.) that the patient/person has not been properly identified and verified. The non-matching speech/image notification may also be delivered to the authorized person, technician, etc. at the local site via the local computing device.
When the non-matching speech notification is sent to the authorized person, the local computing device also disables the surgical equipment at the local site in order to prevent the procedure from being performed, or from being completed, on the incorrect patient/person (e.g., in a laser coagulation procedure, laser firing will be automatically locked out by the local computing device), or the entrance of a person into a secure location will be prevented. In one or more embodiments, the voice changes representing various emotional states of the patient may be recorded for diagnosis of excessive stimuli, such as emotion, pain, or satisfaction, or to determine if a person is under the influence of a substance, etc.
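
A minimal sketch of the frequency-content comparison described above is given below; it assumes both waveforms are captured at the same sampling rate, and the banded-spectrum representation and the 0.9 acceptance threshold are illustrative choices, not values taken from the disclosure:

```python
import numpy as np

def band_spectrum(waveform, rate, lo=50.0, hi=5000.0, bins=64):
    # Magnitude spectrum restricted to the 50 Hz - 5,000 Hz speech band,
    # pooled into coarse bins so small pitch drift does not dominate.
    spec = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / rate)
    band = spec[(freqs >= lo) & (freqs <= hi)]
    pooled = band[: len(band) // bins * bins].reshape(bins, -1).mean(axis=1)
    return pooled / (np.linalg.norm(pooled) + 1e-12)

def speech_matches(reference, candidate, rate, threshold=0.9):
    """Cosine similarity of banded spectra; the 0.9 threshold is illustrative."""
    sim = float(np.dot(band_spectrum(reference, rate),
                       band_spectrum(candidate, rate)))
    return sim >= threshold, sim

rate = 16000
t = np.arange(rate) / rate
reference = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
candidate = np.sin(2 * np.pi * 220 * t) + 0.45 * np.sin(2 * np.pi * 440 * t)

ok, sim = speech_matches(reference, candidate, rate)
print("matched speech confirmation" if ok else "non-matching speech notification",
      f"(similarity={sim:.3f})")
```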

[0082] In one embodiment, the local computing device is further programmed to ask the patient/person a question so that the patient/person is able to respond to the question posed to him or her using natural language. That way, the system is not only able to analyze the physical surface attributes of the patient, but is also able to analyze the sound of the patient's voice (i.e., by voice recognition recording), and to communicate simultaneously with accessible existing data in a computer database to verify the identity of the patient. In one embodiment, the dynamic image/voice recognition involves the tacit collaboration or consent of the patient or person being identified, in contrast to existing facial recognition systems, which are not dynamic, yield very limited data, and do not imply the person's consent.

[0083] Also, as shown in FIG. 1, the telemedicine system 50' of the illustrative embodiment also includes a central control system 58 at a remote site having a command computer 59b that is operatively connected to a remote control module 64. Also, an operator (e.g., an authorized person or a customs agent, etc.) may be disposed at the remote site 58 during the person's imaging by the image recognition system. In FIG. 1, it can be seen that the central control system 58 at the remote site, which includes the command computer 59b, is operatively connected to the plurality of local control systems disposed at local sites 51a', 51b', 51c' via a computer network that uses the Internet 60.

[0084] In one embodiment, the time and the information obtained from the patient/person are recorded and stored for future recall. In one embodiment, for a patient/person whose history does not yet exist in the computer database, the information will be recorded so that the patient is recognized by the system the next time he or she comes for reexamination.

[0085] In one embodiment, the system can access other computers to search for the existence of similar cases and photos of a surface lesion or marks for a patient (e.g., a photograph, X-ray, CT-scan, MRI, PET-scan, etc. of a lesion). The tele-image recognition system may also be used to access existing data in the published literature, such as through an artificial intelligence (AI) system or augmented reality system (e.g., IBM Watson), to assist the doctor with new information, images, therapies, medications used, predictions of the outcome, etc., or to assist a security agency with the recognition of a person or of a lesion/symptoms for security reasons, diagnosis, and further therapy recommendations for the patient.

[0086] In one embodiment, the computer system functions as an informational unit augmenting the knowledge of the doctor and presents him or her with similar recorded cases to assist in a better telemedicine diagnosis and management. In another embodiment, the system assists the authorized security agent with the recognition of the person or the similarity of the existing history related to a subject.

[0087] In one embodiment, the system may augment recognition of the patient/person by using additional information from a fingerprint, etc. For example, information regarding the tip of the patient's finger may be recorded (e.g., the blood circulation or thermal image in the finger differentiating a person from a robot, as well as images of the ridges and minutiae, etc. of the fingerprint). Also, the touching of a removable surface permits DNA to be obtained if a crime is involved, and permits spectroscopy and facial "fingerprint-wrinkles" of the person to be used for identifying the person/patient in conjunction with the dynamic facial recognition of the person in a two-dimensional and/or three-dimensional manner, using multiple cameras 142 (e.g., two to four cameras) to observe the object (e.g., a finger 138 on transparent surface 140) 360 degrees in 3D (see FIG. 4a). As such, the system advantageously provides augmented dynamic identification of the person/patient.

[0088] In one embodiment, the system described herein is used in artificial intelligence or augmented intelligence applications to observe and predict the trend of an action (e.g., for a vehicle, in manufacturing, etc.).

[0089] In one embodiment, the system is used to recognize a pilot who is flying a commercial airplane or another type of airplane, or a person operating a train, driverless car, ship, drone, etc. The system may also be used to recognize personnel in a security department or the military, or to recognize a doctor or patient in the operating room.

[0090] In one embodiment, the system works as an augmented intelligence assistant so as to assist a person in making a proper decision (e.g., the proper treatment of a cancer patient).

[0091] In one embodiment, the system can accept or reject a customer, such as in remote banking and/or in an ATM system.

[0092] In one embodiment, the information obtained by the system is encrypted and transmitted so as to make it virtually impossible to be hacked by a third party.

[0093] In one embodiment, the system can differentiate between a human person and a robot by means of its multispectral or hyperspectral camera analysis, which records the body's temperature versus the relatively cold body of a robot or a photograph, together with the reflective properties of the body surface, etc., as well as the variation in temperature induced unknowingly and instantaneously by the dynamic displacement of the skin folds or wrinkles. These induced dynamic changes are analyzed instantaneously to differentiate a living body from a non-living object or robot.
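
A hedged sketch of such a living-body test is shown below; it assumes a registered thermal image stack and skin mask, and the temperature and variation thresholds are arbitrary placeholders:

```python
import numpy as np

def is_live_subject(thermal_frames, skin_mask, min_temp_c=30.0, min_variation=0.05):
    """Illustrative liveness test: a living face is warm and its skin regions
    show small, continual thermal fluctuations over time, whereas a robot,
    photograph, or mask is cooler and nearly static.

    thermal_frames: (T, H, W) array of per-pixel temperatures in Celsius.
    skin_mask:      (H, W) boolean mask of the skin region of interest.
    """
    region = thermal_frames[:, skin_mask]                 # (T, n_pixels)
    warm_enough = region.mean() >= min_temp_c             # warm body vs. cold object
    varies = region.std(axis=0).mean() >= min_variation   # dynamic micro-changes
    return bool(warm_enough and varies)

# Synthetic check: a warm, slightly fluctuating "face" vs. a cold static print.
rng = np.random.default_rng(0)
mask = np.ones((8, 8), dtype=bool)
live = 34.0 + 0.2 * rng.standard_normal((30, 8, 8))
photo = 22.0 + 0.001 * rng.standard_normal((30, 8, 8))
print(is_live_subject(live, mask), is_live_subject(photo, mask))  # True False
```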

[0094] As mentioned above, in the illustrative embodiment, the image recognition sensor 73 or imaging device at the remote laser delivery site may be in the form of a digital light field photography (DLFP) camera with microlenses that capture the information about the direction of the incoming light rays and a photosensor array that is disposed behind the microlenses. A specially programmed data processing device (e.g., a computer) is used to process the information obtained from the light field camera. As explained above, the light field camera may be an integral part of the aforedescribed tele-image recognition system or may alternatively be provided as part of an independent dynamic imaging system.

[0095] In one or more embodiments, the light field digital camera or digital light field photography (DIFLFP) camera comprises one or more fixed optical element(s) as the objective lens providing a fixed field of view for the camera. A series of microlenses are located at the focal point of the objective lens in a flat plane perpendicular to the axial rays of the objective lens. These microlenses separate the incoming rays of light entering the camera into individual small bundles. The individual small bundles of light are refracted onto a series of light-sensitive sensors, numbering in the hundreds of megapixels, which are located behind the plane of the microlenses, thereby converting the light energy into electrical signals. The electronically generated signals convey information regarding the direction, view, and intensity of each light ray to a processor or a computer. Each microlens has a view and perspective that partially overlaps with that of the next microlens, which can be retraced by an algorithm. Appropriate software and algorithms reconstruct computer-generated 2-D and 3-D images of the photographed objects, not only those in focus, but also those located in front of or behind the object, from 0.1 mm from the lens surface to infinity, by retracing the rays via the software and algorithm, which modifies or magnifies the image as desired while electronically eliminating the image aberrations, reflections, etc. As such, by using a digital camera, one can manipulate the image data by using an appropriate algorithm so as to create new in-focus image(s) mathematically, and the processor can combine deep learning and depth information to obtain 3-D images using convolutional neural networks (CNN) for rapid dynamic image recognition.
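
For readers unfamiliar with light field processing, the following minimal sketch shows the standard shift-and-sum refocusing idea on a synthetic 4-D light field; the array layout and the shift rule follow common light field practice and are not quoted from the disclosure:

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-sum refocusing of a 4-D light field L[u, v, y, x]:
    each sub-aperture view (u, v) is translated in proportion to its
    offset from the lens center, then all views are averaged. alpha
    controls which depth plane comes into focus."""
    U, V, H, W = light_field.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - uc)))
            dx = int(round(alpha * (v - vc)))
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

# Tiny synthetic light field: 5x5 sub-aperture views of a random scene.
rng = np.random.default_rng(1)
lf = rng.random((5, 5, 32, 32))
image_near = refocus(lf, alpha=1.0)    # focus on a nearer plane
image_far = refocus(lf, alpha=-1.0)    # focus on a farther plane
print(image_near.shape, image_far.shape)
```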

[0096] In one or more embodiments, the light-sensitive sensors behind the lenslets of the camera record the incoming light and forward it as electrical signals to the camera's processor, acting as an on/off switch for the camera's processor, which measures the intensity of the light through its neuronal network and its algorithm to record changes in light intensity, while recording any motion or dynamic displacement of an object, or part of an object, in front of the camera within a nanosecond to a microsecond of time. The processor of the camera, with its neuronal network algorithm, processes the images much as the retina and brain of a human being function, by finding the pattern in the data and the dynamic changes of the image and its trend over a very short period of time (e.g., a nanosecond). The information is stored in the memory system of the camera's processor, as known in the art, such as in a memory resistor (memristor) relating electric charge and magnetic flux linkage, which can be retrieved immediately or later and further analyzed by the known mathematical algorithms of the camera, and it can be used for many different applications in addition to 2-D and 3-D dynamic image recognition of near or remote subjects incorporated in the remote telemedicine system described above. However, in addition to being used in the aforedescribed tele-image recognition system, the light field digital camera described herein has other independent applications, such as in artificial intelligence, smartphones, self-driving vehicles (e.g., self-driving cars), and drones, as well as in crowdsourcing and surveillance, use by security agencies or in home security systems, use by customs officials in airports, or in other applications where large numbers of people gather, such as sporting events, movie theaters, and sports stadiums.

[0097] In one or more embodiments, the light field camera may have either a tunable lens or a fluidic lens that will be described hereinafter. If a tunable lens is utilized, the tunable lens may be in the form of a shape-changing polymer lens (e.g., an Optotune® lens), a liquid crystal lens, or an electrically tunable lens (e.g., using electrowetting, such as a Varioptic® lens). Alternatively, the fluidic lens described hereinafter may be used in the light field camera.

[0098] In one or more illustrative embodiments of the light field camera using a fluidic lens, the digital in-focus, light field photography (DIFLFP) camera provides a variable field of view and variable focal points from the objective tunable lens, within one second to a millisecond, from an object located just in front of the objective lens to infinity, as the light rays pass through a microlens array in the back of the camera and a layer of sensors made of light-sensitive quantum dots, which, along with the microlens layer, create a concave structure. As such, the lens generates more light and signal information from the variable focal points of the flexible fluidic lens, which is capable of being used by a software algorithm of a processor so as to produce 2-D, 3-D, or 4-D images in real-time or as video. The generated images reduce the loss of light that occurs when refocusing the rays in standard light field cameras, and instead use the direction and intensity of the light rays to obtain a sharp image at any distance from the lens surface to infinity. This is produced in one cycle of changing the focal point of a tunable or hybrid fluidic objective lens of the camera electronically, or by simultaneously using a microfluidic pump to bring the lens from maximum convexity to its least amount and back.

[0099] In one or more embodiments, the fluidic lens is dynamic because the plane of the image inside the camera moves forward or backward with each electric pulse applied to the piezoelectric or microfluidic pump motor, which transmits a wave of fluid flow inside the lens cavity or aspirates the fluid from the lens cavity so that the membrane returns to its original position, thereby creating either a more or less convex lens, or a minus lens when the back side has a glass plate with a concave shape.

[00100] In one embodiment, the lens of the light field camera is only a flexible transparent membrane that covers the opening of the camera's cavity, into which fluid or air is injected or from which it is removed so as to create a convex or concave surface, using a simple piezoelectric attachment that can push the wall of the camera locally inward or outward, thereby forcing the transparent membrane, which acts like a lens, to become convex or concave and to change its focal point from a few millimeters (mm) to infinity and back, while all data points are recorded and analyzed by its software.

[00101] In one or more embodiments of the light field camera with the fluidic lens, the light rays entering the camera pass through the microlenses located in the back of the camera directly to the sensors made of nanoparticles, such as quantum dots (QDs) made of graphene, etc.

[00102] In one or more embodiments of the light field camera with the fluidic lens, the camera obtains a subset of signals from the right or left side of the microlens and sensor array separately to reconstruct the 3-D image from the information.

[00103] In one or more embodiments of the light field camera with the fluidic lens, the light rays focused by the fluidic lens either anterior or posterior to the focal plane of the microlens/sensor plane are converted to electrical signals, which are transmitted to the camera's processor with the software algorithm loaded thereon so that the images may be displayed as static 2-D or 3-D multispectral or hyperspectral images, or so that a tomographic image or a video of a movable object may be created.

[00104] In one or more embodiments of the light field camera with the fluidic lens, the right or left portions of the sensors are capable of displaying from a focal point located either slightly anterior or slightly posterior to the microlens, thereby providing more depth to the image without losing the light intensity of the camera, as is the case with the standard light field camera having a static objective lens or a static membrane, which is entirely dependent on producing a virtual image obtained from a fixed focal point.

[00105] In one or more embodiments of the light field camera with the fluidic lens, a prismatic lens may be disposed between the microlens array and the sensors so that individual wavelengths may be separated to produce color photography or multispectral images including the infrared or near infrared images.

[00106] In one or more embodiments of the light field camera with the fluidic lens, the process of focusing and defocusing collects more light rays that may be used to create 2D, 3D, or 4D images.

[00107] In one or more embodiments of the light field camera with the fluidic lens, the fluidic lens can change its surface by injecting and withdrawing the fluid from the lens and returning to its original shape in a time range of one second to less than a millisecond, thereby allowing the light rays to be recorded that pass through a single row or multiple rows of microlenses before reaching the sensor layer of quantum dots or a monolayer of graphene or any semiconductor nanoparticles that absorb the light energy and convert it to an electrical signal.

[00108] In one or more embodiments of the light field camera, the flexible transparent membrane can change its surface by injecting and withdrawing the fluid/air from the camera's cavity and returning to its original shape in a time range of one second to less than a millisecond, thereby allowing the light rays to be recorded that pass through a single row or multiple rows of microlenses before reaching the sensor layer of quantum dots or a monolayer of graphene or any semiconductor nanoparticles that absorb the light energy and convert it to an electrical signal.

[00109] In one or more embodiments of the light field camera with the fluidic lens, by pumping fluid into the fluidic microlens system, the field of view of the lens is expanded, and it returns to its original position upon relaxation. During this period of time, the light rays that have entered the system have passed through a series of microlenses, which project the rays onto a layer of photosensors that become stimulated, thereby creating an electrical current traveling to a processor or computer with a software algorithm loaded thereon to analyze and create a digital image of the outside world. In one or more embodiments, the microlens array of the fluidic lens may include a pluggable adaptor.

[00110] In one or more embodiments of the light field camera with the fluidic lens, the microlenses and the layer of sensors extend outward so as to create a concave structure inside the camera, thereby permitting the incoming light rays of the peripheral field of view to be projected onto the peripherally located microlenses and sensors of the camera so as to be absorbed and transferred to the processor with the algorithm loaded thereon, which mathematically analyzes, manipulates, and records the light data so as to provide a combination of signals that shows the direction from which the rays emanated.

[00111] In one or more embodiments of the light field camera with the fluidic lens, the microlens array is in the form of a graded-index (GRIN) lens array so as to provide excellent resolution.

[00112] In one or more embodiments of the light field camera with the fluidic lens or transparent flexible membrane, the microlens array is separated from another, smaller array of nano-sized lenses attached to a filter, followed by the sensors, in order to differentiate the color wavelengths.

[00113] In one or more embodiments of the light field camera with the fluidic lens, the deformable objective lens, by changing its refractive power, its field of view, and its focus, transmits significantly more information to the computer in one millisecond cycle than a single static lens, or a simple lensless membrane with compressive sensing and no microlenses, is capable of doing. It also maintains, across its unlimited focal points, sufficient signal data that is able to be easily reproduced or refocused, instantaneously or later, by the camera's software algorithms so as to create sharp images in 2, 3, or 4 dimensions. The exposure time can be prolonged or shortened, as needed, by repeating the recording cycle from less than one Hertz to >30 Hertz to thousands of Hertz or more, which is enough for cinematography, while the light rays pass through the unlimited focal points of the lens, back and forth between the sensors and the back of the lens, covering a long distance from a few mm to infinity, achieving fast, sharp images by ray retracing and mathematical reconstruction as compared to a photo taken by a camera with a solid, fixed objective lens.

[00114] In one or more embodiments of the light field camera with the fluidic lens, the signals also can be analyzed by the algorithm of the computer located outside the camera for any object that is photographed at any given distance.

[00115] In one or more embodiments of the light field camera with the fluidic lens, the camera's processor or a computer can retrace the light rays in any direction, thereby simultaneously eliminating refractive aberrations or motion blur while the light is focused over any distance before or beyond the focal point of the lens using the computer software.

[00116] In one or more embodiments, the fluidic light field camera will provide an immense amount of data during the short period of time in which the lens membrane is displaced as a result of pumping fluid into the system and withdrawing it, the forward and backward movement creating three-dimensional images with depth of focus. These images are easily recreated without sacrificing the resolution of the image and without the need for "focus bracketing" to extend the re-focusable range by capturing 3 or 5 consecutive images at different depths, as is done in standard light field cameras, with the complete parameterization of light in space as a virtual hologram.

[00117] In one or more embodiments, the objective lens of the digital light field photography (DIFLFP) camera is a fluidic lens in which the power of the lens varies from -3.00 to +30.00 diopters depending on the amount of fluid either injected into or withdrawn from the fluidic lens with a micro-pump, with an aperture of 2 to 10 millimeters (mm) or more.
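
As a rough, assumed illustration of how membrane deflection maps to dioptric power, the sketch below applies the thin-lens lensmaker's equation to a plano-convex spherical-cap membrane; the fluid index, aperture, and sag values are hypothetical, not taken from the disclosure:

```python
def membrane_radius_of_curvature(aperture_radius_m, sag_m):
    # Spherical-cap geometry: a membrane bulging by `sag` over an aperture
    # of radius `a` approximates a sphere of radius R = (a^2 + h^2) / (2h).
    return (aperture_radius_m ** 2 + sag_m ** 2) / (2.0 * sag_m)

def plano_convex_power(n_fluid, radius_m):
    # Thin-lens lensmaker's equation with one flat surface: P = (n - 1) / R,
    # giving power in diopters when R is in meters.
    return (n_fluid - 1.0) / radius_m

# Assumed values for illustration: 6 mm aperture, water-like fluid (n ~ 1.33).
aperture = 3e-3  # aperture radius in meters (6 mm diameter)
for sag_mm in (0.05, 0.2, 0.4):
    R = membrane_radius_of_curvature(aperture, sag_mm * 1e-3)
    print(f"sag {sag_mm:>4} mm -> R = {R * 1e3:6.1f} mm, "
          f"power = {plano_convex_power(1.33, R):5.1f} D")
```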

[00118] In one or more embodiments, the objective lens is a liquid or tunable lens, such as an electrically and mechanically tunable lens controlling the focal length of the lens.

[00119] In one or more embodiments, the tunable lens is a liquid crystal, and molecules of the liquid crystal are capable of being rearranged using an electric voltage signal.

[00120] In one or more embodiments, the digital light field photography (DIFLFP) camera utilizes a hybrid lens, as described in Applicant's U.S. Pat. No. 9,671,607, which is incorporated by reference herein in its entirety. In such a hybrid lens, the increase or decrease of the fluid in the fluidic lens chamber occurs electronically with either a servo motor, or a piezoelectric system for a rapid response.

[00121] In one or more embodiments, the DIFLFP camera system obtains image and depth information at the same time.

[00122] In one or more embodiments, during the photography, the increase or decrease of the fluid in the fluidic lens is done at a high frequency, changing the focal plane of the fluidic lens during the time in which millions or billions of light rays are sensed and recorded for analysis.

[00123] In one or more embodiments, the rays of light are collected from a wide concave surface of the sensor arrays located behind hundreds of thousands of microlenses that curve up in the back of the camera during the change in the focal point of the fluidic lens, which also creates a wider field of view, producing millions to billions of electronic pulses from which sharp, wide-field images or videos are reconstructed by the specially programmed computer in a 2-, 3-, or 4-dimensional manner from the objects at any desired distance in the field of view without losing the sharpness of the image.

[00124] In one or more embodiments, the DIFLFP camera captures light from a wider field, increasing or decreasing the field of view, unlike fixed objective lenses or compressive cameras with their aperture assemblies.

[00125] In one or more embodiments, the objective lens is a composite of a fluidic lens and a solid lens, a diffractive lens, or a liquid crystal coating with electronic control of its refractive power.

[00126] In one or more embodiments, the microlenses are replaced with transparent photosensors, where the sensors directly communicate with the processor and software algorithm to build the desired images.

[00127] In one or more embodiments, the solid lens is located behind the flexible membrane of the fluidic lens or inside the fluidic lens providing a wider field of view and higher magnification.

[00128] In one or more embodiments, the additional lens can be a convex or a concave lens to build a Galilean or astronomical telescope.

[00129] In one or more embodiments, the lens is replaced with a flexible membrane that is capable of moving forward or backward and that has on its surface a two-dimensional aperture assembly, providing a wider field of view than standard lensless light field cameras when the lens becomes more convex and pushes the membrane's surface forward.

[00130] In still one or more further embodiments, the objective lens of the light field camera is only a transparent flexible membrane supported by the outer housing of the camera's cavity, or by a housing defining the camera's chamber, which receives a fluid therein (e.g., air or another gas) through a cannula. When the fluid is injected into the camera's cavity, the flexible transparent membrane bulges out, acting as a convex lens, and when the fluid is withdrawn from the camera's cavity, the membrane becomes a flat transparent surface and then assumes a concave shape, acting as a minus lens, as the light passes through it to reach the lenslets and the sensors in the back of the fluidic field camera that are connected to a processor.

[00131] In yet one or more further embodiments, the objective lens of the light field camera may use a compressible polymer, such as silicone, etc., that changes its surface curvature based on the physical pressure applied to the lens.

[00132] In still one or more further embodiments, a simple flexible transparent membrane acts as the lens, with its surface convexity or concavity controlled as desired.

[00133] In one or more embodiments of the DIFLFP camera, there are numerous microlenses in the focal plane of the liquid lens.

[00134] In one or more embodiments, the microlenses are 3-D printed to less than 1 micrometer in diameter and lens structure, or are nanolenses of less than 10 nanometers (nm).

[00135] In one or more embodiments, the microlenses are 3-D printed from silicone, or any other transparent polymer.

[00136] In one or more embodiments, the sensors are 3-D printed and placed in the camera.

[00137] In one or more embodiments, the camera wall is 3-D printed.

[00138] In one or more embodiments, the two-dimensional microlens plane extends slightly forward, forming a concave plane, to capture more light from the peripheral surface areas of the objective liquid lens as it moves forward and backward.

[00139] In one or more embodiments, the plane of the sensor array follows the curvature of the forwardly disposed microlens plane for building a concave structure (refer to FIGS. 3 and 4).

[00140] In one or more embodiments of the DIFLFP camera, the light sensors obtain information on the direction and light intensity from a wide field of view.

[00141] In one or more embodiments, the sensors provide electronic pulse information to a processor or a computer equipped with a software algorithm to produce the desired sharp monochromatic or color 2-D to 4-D images.

[00142] In one or more embodiments, the computer is powerful enough to obtain millions or billions of bits of information, and it has a software algorithm to provide images of any object located in the field of view, in front of or behind a photographed object, ranging from a very short distance from the objective lens surface to infinity.

[00143] In one or more embodiments of the DIFLFP camera, the computer and its software algorithm are capable of producing 2-, 3-, or 4-dimensional sharp images, with the desired magnification, and in color form, for any object located in front of the camera.

[00144] In one or more embodiments, the camera can provide an instant video as a 2-D or 3-D image projected on an LCD monitor located on the back of the camera.

[00145] In one or more embodiments, the photos or videos captured using the camera are sent electronically via the Internet to another computer using a GPU system, etc.

[00146] In one or more embodiments, using DIFLFP live video, time-related images can be presented in the fourth dimension with real-time high-speed processing. To achieve high-speed processing, a graphics processing unit (GPU), a programmable logic chip, or a field-programmable gate array (FPGA) may be provided along with a high-performance processor, such as a VLIW (Very Long Instruction Word) core, and a digital signal processor (DSP) microprocessor.

[00147] In one or more embodiments, the DIFLFP camera is used for visualization of a live surgery that can be projected in 3-D or 4-D using the fluidic lens light field camera in the operating microscope and simultaneously projected back onto the ocular lenses of the operating microscope. It may also be used in robotic surgery of the brain, heart, prostate, knee, or any other organ, in an electronic endoscope system, in 3D marking in laser processing systems, in barcode scanning, in automated inspection with a distance sensor, in neuroscience research for documenting the nerves, in retinal photography where the eye cannot be exposed to the light for a long time or where a long exposure time is needed in low-light photography, or for a variable spot size in light emitting diode (LED) lighting.

[00148] In one or more embodiments, the DIFLFP camera has a distance sensor controlling the initial start of the image focused on a certain object in the DIFLFP field of view, can be used in macro- or microphotography, and may have a liquid crystal display (LCD) touch screen.

[00149] In one or more embodiments, the wavefront phase and the distance from the object are calculated by the software, which measures the degree of focusing required for two rays to come into focus.

[00150] In one or more embodiments, the DIFLFP camera is used for the creation of augmented reality and virtual reality.

[00151] In one or more embodiments, the DIFLFP camera is used with additional lenses in tomographic wavefront sensors, measuring amplitude and phase of the electromagnetic field.

[00152] In one or more embodiments, the DIFLFP camera can generate stereo-images for both eyes of the user to see objects stereoscopically.

[00153] In one or more embodiments, the DIFLFP camera is equipped with an auto sensor to focus on a moving object, such as in sport activities or in dynamic facial recognition.

[00154] In one or more embodiments, the light field camera may also be used as a part of a dynamic facial recognition system for patient identification and verification, as described above. Advantageously, the tunable lens, when combined with the other components of the light field camera, offers precise focusing using the microlenses, nanosensors, and computer to analyze numerous focal points that can be reconstructed mathematically using specific algorithms implemented by the data processing device (i.e., the computer of the dynamic imaging system). Also, the dynamic imaging system may use the computer to verify or identify various changes that happen during the change in physiological function of a person's facial expression (e.g., smiling or frowning), as a computer-generated digital 2D or 3D image or video frame records the dynamic changes of a structure, such as a face, mouth, eyes, etc., and the computer analyzes and compares the biometrics, as a dynamic physiological fingerprint, with existing data of the same image. The computer algorithm analyzes the changes in the relative position of a patient's face, matching points and directions, and compresses the data obtained during the process using dynamic recognition algorithms. One exemplary technique employed is the statistical comparison of the first obtained values with the second values in order to examine the variances, using a number of means, including multispectral light that reveals the various physiological changes of the face. Mathematical patterns of the digital images and statistical algorithms are capable of demonstrating that the images obtained initially and subsequently belong to the same person, etc.
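
One way such a statistical comparison of first and second values might look in practice is sketched below, using synthetic tracked facial points; the 68-point layout, the displacement signature, and the mean-deviation score are illustrative assumptions, not the method recited in the disclosure:

```python
import numpy as np

def displacement_signature(rest_points, expression_points):
    # Per-landmark displacement vectors between the resting face and the
    # same face performing a requested expression (e.g., smiling).
    return expression_points - rest_points

def same_person_score(ref_signature, new_signature):
    """Compare two displacement signatures; a small mean per-point distance
    suggests the same underlying 'dynamic physiological fingerprint'."""
    d = np.linalg.norm(ref_signature - new_signature, axis=1)
    return float(d.mean())

rng = np.random.default_rng(2)
rest = rng.random((68, 2)) * 100                  # 68 tracked points, in pixels
smile = rest + rng.normal(0, 2.0, rest.shape)     # displaced by the expression
ref_sig = displacement_signature(rest, smile)

# Re-enrollment of the same person: the signature nearly repeats.
new_sig = ref_sig + rng.normal(0, 0.3, ref_sig.shape)
print(f"mean per-point deviation: {same_person_score(ref_sig, new_sig):.2f} px")
```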

[00155] For example, the computer may compare the images using known techniques, such as elastic bunch graph matching, face recognition using a Fisherface algorithm, principal component analysis, Eigenfaces, linear discriminant analysis, hidden Markov models and multi-linear subspace learning, face recognition using dynamic link matching, or the use of computational archeology, process-based modeling, simulation and virtual reality, and archeological predictive modeling, etc.

[00156] In one embodiment, for 3-D object imaging, one can use a sophisticated system sensing visible and infrared light, in which one or more sensors can be placed on a CMOS chip capturing various spectra of light. In one embodiment, three to four cameras image the subject from different angles, from the sides or from above and below, where the cameras can track the subject in real time so that the images are stitched together to create a 3-D or 4-D structure in real time to be analyzed and recognized. In one embodiment, one can use deep learning to identify the initial digital image and subsequently follow it through dynamic changes, such as smiling, frowning, showing the teeth, closing an eye, or speaking a few vowels while recording the voice over the changes of the mouth or lips, in order to recognize the identical pixelated areas of the face. The changed area, evaluated using a subtraction algorithm applied to the new pixelated structure, can then be compared with the data obtained initially to predict the changed regions and the degree of the changes, in addition to the voice recognition data, thereby obtaining considerably more data than simple facial recognition so as to obtain 99.999% accuracy of recognition regardless of the pigmentation of the skin or the gender.
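
A minimal sketch of the pixel subtraction step described above is given below; it assumes the two captures are already registered, and the change threshold is an arbitrary placeholder:

```python
import numpy as np

def subtracted_change_mask(before, after, threshold=0.1):
    """Pixel-wise subtraction of two registered grayscale captures of the
    same face: returns the changed-pixel mask and the fraction of the
    image that moved (e.g., the mouth region during a smile)."""
    diff = np.abs(after.astype(float) - before.astype(float))
    mask = diff > threshold
    return mask, float(mask.mean())

rng = np.random.default_rng(3)
before = rng.random((64, 64))
after = before.copy()
after[40:55, 20:44] += 0.5     # simulate a mouth-area displacement

mask, changed_fraction = subtracted_change_mask(before, after)
print(f"changed area: {changed_fraction:.1%} of pixels")
```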

[00157] In one embodiment, the dynamic identity data can also be placed in a chip, serving as a passport for that individual person, phone, software, or computer, that reveals the identity of the person evaluated, e.g., at a port of entry; the dynamic analysis can be repeated many times in a different manner to obtain a different set of data for comparison. In this system, a big smile will not interfere with the person's recognition, because everyone goes through the same step-by-step process, and dynamic changes are merely a step in real time, ending with the subtraction algorithm predicting, with the additional information obtained in real time, the identity of the person.

[00158] Also, these techniques may be employed by the dynamic imaging system for exploratory data analysis to predict the changes of the image (e.g., aging of a face or tumor growth, etc.). For example, a dynamic analysis of the growth of a tumor may be performed using the dynamic imaging system described herein (e.g., by analyzing the increase in the surface area or the volume of the tumor). That is, using the system described herein, the volumetric changes of a tumor or lesion are capable of being measured over a time period by the software subtraction algorithms of the system, as explained below, and then transmitted to the treating physician. In addition, the dynamic changes in a portion of a patient's body may be compared with existing data for diagnosis or differential diagnosis so as to track and analyze trends in disease progression or disease improvement against baseline data for the management of diseases. The dynamic image recognition system described herein may be configured to track changes in the disease process (e.g., diabetic retinopathy, another retinal disease, or disease of the brain, spinal cord, vertebrae, prostate, uterus, ovaries, intestine, stomach, extremities, lung, heart, or skin, eczema, breast cancer, a tumor in the body, etc.) over a period of time so that the disease process is capable of being monitored by a physician, where the image may be obtained using a standard imaging system (e.g., photographs, X-ray images, CT-scans, retinal images obtained by the use of a fundus camera, OCT, etc.). Also, follow-up images may be acquired using X-ray, CT-scan, positron emission, MRI, ultrasound, or photoacoustic imaging, etc.
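
The volumetric subtraction idea for a tumor or lesion might be reduced to the following hedged sketch, which assumes binary segmentation masks from two registered scans and a known voxel size (both assumptions for the example):

```python
import numpy as np

def lesion_volume_mm3(mask, voxel_mm=(1.0, 1.0, 1.0)):
    # Volume = number of segmented voxels times the volume of one voxel.
    return float(mask.sum()) * voxel_mm[0] * voxel_mm[1] * voxel_mm[2]

def growth_report(mask_t0, mask_t1, voxel_mm=(1.0, 1.0, 1.0)):
    """Subtract the baseline from the follow-up segmentation to quantify growth."""
    v0 = lesion_volume_mm3(mask_t0, voxel_mm)
    v1 = lesion_volume_mm3(mask_t1, voxel_mm)
    return v0, v1, (v1 - v0) / v0 * 100.0

# Synthetic example: a lesion that grows from a 10 mm to a 12 mm cube.
grid = np.zeros((64, 64, 64), dtype=bool)
t0, t1 = grid.copy(), grid.copy()
t0[27:37, 27:37, 27:37] = True
t1[26:38, 26:38, 26:38] = True

v0, v1, pct = growth_report(t0, t1)
print(f"baseline {v0:.0f} mm^3 -> follow-up {v1:.0f} mm^3 ({pct:+.1f}%)")
```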

[00159] In one or more embodiments, a grid is projected over the area of interest in the dynamic image recognition system to clearly define the position of each pixelated area of the face, to compare it with the image or displacement that has occurred in the process, and to superimpose the images on each other, by the computer executing a dynamic subtraction software algorithm, in order to demonstrate the degree of the change and its trend during the displacement or change. The result is presented as a subtracted image when using a multispectral camera capable of analyzing a wide spectrum of wavelengths (images), including the infrared or near-infrared wavelengths, and the software of the camera rapidly analyzes them and presents the result as 2-D or 3-D images (refer to FIGS. 5a and 5b).
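
A simple, assumed realization of per-grid-cell displacement tracking is sketched below using exhaustive block matching between two frames; the cell size and search radius are illustrative choices, not parameters taken from the disclosure:

```python
import numpy as np

def cell_displacement(frame0, frame1, top, left, size=16, search=4):
    """Best integer (dy, dx) shift of one grid cell between two frames,
    found by exhaustive block matching (sum of absolute differences)."""
    ref = frame0[top:top + size, left:left + size]
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > frame1.shape[0] or x + size > frame1.shape[1]:
                continue
            err = np.abs(frame1[y:y + size, x:x + size] - ref).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

rng = np.random.default_rng(4)
f0 = rng.random((64, 64))
f1 = np.roll(f0, shift=(2, 1), axis=(0, 1))        # scene moved 2 down, 1 right
print(cell_displacement(f0, f1, top=24, left=24))  # -> (2, 1)
```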

[00160] FIGS. 2a and 2b depict a grid 121 that has been projected over the facial area of a person 120 in order to track dynamic changes in the lips 122 of the person 120. In FIG. 2a, the positions of two exemplary points 116, 118 are being used to track the dynamic changes in the lips 122 of the person 120. In FIG. 2b, the positions of two exemplary points 126, 128 are being used to track dynamic changes in the lips 122 of the person 120 while the person 120 is speaking the letter "O". As shown in FIG. 2b, as the letter "O" is spoken by the person 120, grooves 124 are formed in the skin of the facial area surrounding the lips 122.

[00161] FIG. 5a illustrates the face of a person 150 with folds 152 on the facial area that are being analyzed using the dynamic imaging system described herein. FIG. 5a further depicts distance(s) 154 between facial features that are being used to track dynamic changes in facial expressions of the person 150 (e.g., between the corners of the mouth). FIG. 5b illustrates the face of the person 150 while the person 150 is smiling, and dynamic changes in the facial features of the person 150 are being analyzed, as well as the trends in those features. The teeth 155 of the smiling person 150 in FIG. 5b provide new data for analysis. Also, in FIG. 5b, it can be seen that the trend distance(s) 157 (e.g., between the corners of the mouth) being tracked from the non-smiling pose of FIG. 5a to the smiling pose of FIG. 5b have increased, and angle "x" has increased to angle "y". FIG. 5c illustrates the face of the person 150 while the person 150 is frowning, and dynamic changes in the facial features of the person 150 are being analyzed, as well as the trends in those features. The frowning of the person 150 in FIG. 5c results in enhanced folds 156. FIG. 5d illustrates a first subtracted image of the human face of the person 150, wherein the subtracted image further allows dynamic changes in the mouth 158 of the person 150, and the trends in those changes, to be analyzed. FIG. 5e illustrates a second subtracted image of the human face of the person 150, wherein the subtracted image depicts enhanced folds 160 on the facial area of the person 150.

[00162] FIGS. 6a and 6b depict images of a tumor 162 being analyzed with the dynamic imaging system described herein. In particular, FIG. 6a illustrates the direction 164 and growth of a tumor 162 over time. In FIG. 6a, it can be seen that the image of the tumor 162 contains superpixels 166. FIG. 6b illustrates a subtracted image of the tumor 162.

[00163] FIG. 7a illustrates the face of a person 168 with folds 170 on the facial area that are being analyzed using the dynamic imaging system described herein while the person 168 is saying the word "Oh". During the speaking of the word "Oh" by the person 168 in FIG. 7a, the sound frequency of the person's voice is being simultaneously analyzed. FIG. 7b illustrates the face of a person 172 with folds 174 on the facial area that are being analyzed using the dynamic imaging system described herein while the person 172 is saying the word "Ah". During the speaking of the word "Ah" by the person 172 in FIG. 7b, the sound frequency of the person's voice is being simultaneously analyzed, and the sound frequency is being correlated with the dynamic changes occurring in the person's face (e.g., by overlaying the sound frequency curve on the facial displacement curve).

[00164] When the system is equipped with a multispectral camera, the multispectral camera may be used to obtain photos either in the visible spectrum or in the infrared to low-infrared light spectrum, working as a thermographic camera that sees deep inside the skin to recognize the status of the circulation under the skin. The infrared pattern recognition capabilities of the system record psychological functional changes occurring under the skin (such as an increase or decrease in the circulation due to sympathetic activation-deactivation) together with the dynamic changes, which are not achievable with a camera having only visible light capabilities.

[00165] In one or more embodiments, the visible spectrum provides information from the surface structure during the dynamic facial changes caused by deliberate activation of the facial muscles, which produces skin grooves around the mouth and the eye, demonstrating the physical aspects of the changes in a person being photographed in a two-dimensional and/or three-dimensional manner.

[00166] The computer software of the system analyzes both of the aforedescribed facial changes and presents them as independent values that can be superimposed mathematically by the computer's software, creating subtraction data that indicates the changes that have occurred and that serves as the initial face interrogation data. This algorithm may be used subsequently for recognition of a face, a body, or a tumor located on the surface of the skin or inside the body, imaged so as to recognize the extent of the changed values in a two- or three-dimensional format. The dynamic imaging system described herein may also be used along with standard imaging systems, such as X-ray, CT-scan, MRI, positron emission, or OCT imaging, to record changes occurring in the images over time. Because the images are pixelated bits of information that can be recorded live (e.g., for comparing an image before surgery and after the surgery), the images can be subsequently subjected to analysis with the subtraction algorithm of the computer software to decide, for example, whether or not a tumor has been completely removed.

[00167] As another example, when the subject frowns, the skin capillaries become collapsed and folded, thereby reducing the blood flow through the collapsed and folded capillaries, as well as the heat that is detected by the multispectral camera. The subtraction algorithm provides the exact changes in facial structure during the frowning of the subject.

[00168] In one or more other embodiments, the subtraction algorithm executed by the computer presents only the subtracted image and its mathematical weight or value, and compares it with the previously obtained subtracted image of the same structure or face in order to verify the identity of the person and to compare the initial values, the extent of the changes between the first and second captured images, and the trend of the changes that have occurred after displacement of the surface or structure of interest, such as a tumor's dimensions over time and its growth trend.

[00169] In one or more embodiments, the before or after image subtraction may be performed during the imaging of a tumor (e.g., a brain tumor) using contrast agents or antibody-coated nanoparticles to demonstrate the degree of involvement of the brain and the effect of surgery or therapy on the 3-D dimensions of the tumor, indicating whether or not the tumor is able to be removed or treated completely. In one or more embodiments, the same computer algorithm is used in machine vision, robotic vision, or drone vision to examine the before and/or after effect of an action over time and to predict the trend.

[00170] In one embodiment, the before and after images of any body structure can be obtained, such as before therapy or after surgery, and the images can be compared with images taken at a different point post-therapy or post-operatively to evaluate the improvement or worsening of a condition, disease, infection, etc.

[00171] In one embodiment, the images are taken intraoperatively, for tumor resection, orthopedic surgery, cosmetic surgery, cardiology, interventional radiology or interventional cardiology, brain surgery, abdominal surgery, gynecology, etc., where the images are taken by multiple cameras, an MRI, a CT-scan, ultrasound, etc. to evaluate the results immediately, or in a virtual reality system zoomed in or out, and to evaluate and predict the progression in the post-operative period or to evaluate the improvement or worsening of a condition, disease, infection, etc.

[00172] In one embodiment, hyperspectral or multispectral images of an agricultural field, stadium, etc. are taken by cameras mounted on a drone and compared with those taken subsequently at any time, in order to observe and enhance the images and/or subtract the changes, and to recognize the changes or predict their development so as to induce protective, corrective, or preventive measures.

[00173] In one embodiment, in dynamic facial recognition, a conventional existing mathematical algorithm is not used to compare two static images and conclude the possibility or probability of them being the same. Rather, the computer of the present system is specially programmed to compare two sets of dynamic images, composed of one static and one dynamic image, which add significantly more information through the dynamic changes, in a two-dimensional and/or three-dimensional manner, that have occurred as a result of the displacements of various points (e.g., in the face of the person), and through the trend of the changes obtained by the computer subtraction of the dynamic changes. To this, two significant values are added that augment the weight of the computer-generated image: the superimposition of the obtained data, or of the pre- and post-dynamic images, for subtraction analysis, by adding the voice or sound waves recorded simultaneously or superimposed over the dynamic facial values; and the final confirmation of the data with dynamic fingerprinting, which has two additional components, namely fingerprint analysis and multispectral photography before and after pressing the finger over the transparent glass and collapsing the capillaries of the hand or the finger. This provides practically a complementary algorithm for nearly infallible identity recognition of a person within a timeframe of 2-1/2 seconds or less.
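
Purely as an illustration of combining the modalities described above into a single decision, the sketch below fuses assumed similarity scores from the dynamic face, voice, and fingerprint subsystems; the weights and acceptance threshold are placeholders, not values from the disclosure:

```python
def fused_identity_decision(face_score, voice_score, finger_score,
                            weights=(0.5, 0.25, 0.25), threshold=0.85):
    """Each score is a similarity in [0, 1] produced by one subsystem
    (dynamic face subtraction, voice-wave comparison, dynamic fingerprint).
    The weighted fusion and the acceptance threshold are illustrative."""
    fused = (weights[0] * face_score
             + weights[1] * voice_score
             + weights[2] * finger_score)
    return fused >= threshold, fused

accepted, fused = fused_identity_decision(0.97, 0.91, 0.88)
print("identity confirmed" if accepted else "identity rejected",
      f"(fused={fused:.3f})")
```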

[00174] In one embodiment, a global positioning system (GPS) is used when transferring data remotely to recognize the time and the location from which the image or voice is transmitted, for example, to a doctor's office or a security organization, smartphone, personal computer, etc., and the data is analyzed in real-time by the software of the unit together with the preexisting data to verify the identity of the person involved.

[00175] In one embodiment, the dynamic image recognition technology described herein uses a software program that not only subtracts the data obtained in the first session from the data obtained in the second session, but that also, through its subtraction algorithm, compares the sound wave of a person at each examination setting to show the difference or the trend, along with a final confirmation of the data with the dynamic fingerprinting technology of pressing a finger or the palm of the hand over a transparent glass, which is photographed before and after pressing the finger/palm with multispectral light waves to demonstrate the finger's or palm's circulation pattern, or its collapse after pressing it over the transparent glass, in addition to the ridges of the skin, folds, minutiae, etc., in a two-dimensional and/or three-dimensional manner for dynamic identity recognition.

[00176] The technology described herein demonstrates a way to subtract the information of a dynamic change mathematically from dynamic recognition data of not only the face, but also the extremities or a moving person, or the variation of the facial folds measured by a multispectral or hyperspectral camera, or the color changes of the collapsed capillaries observed after pressing the finger or palm, or the changes of the facial capillaries during an interrogation or during the observation of a joyful or sad movie, etc., all of which add value to the correct identification of a person.

[00177] In one embodiment, the subtraction algorithm may also be used to predict a trend using one of the existing mathematical algorithms described below, and to recognize changes or the trend for the evaluation of one set of dynamic imaging data together with another set related to the first, such as adding to one dynamic facial recognition another one created by pressing on the face to bleach the facial capillaries, or by inducing an artificial element that changes the facial folds (e.g., frowning), combined with the voice and the dynamic changes of the mouth produced by speaking a vowel or smiling, etc., in a two-dimensional and/or three-dimensional manner.

[00179] In another embodiment, the multispectral or hyperspectral camera is used to obtain both spectral and spatial data from the images before and after the dynamic changes have occurred on an external area of the subject. In one embodiment, the characteristics and changes of a moving object is recorded and analyzed providing simultaneous imaging, processing, and evaluation by the software of the processor of the camera, thus sensing the simultaneous changes that are happening and the trend of the changes (e.g., on the surface or below the surface of the skin or a subject being photographed). In one embodiment, the dynamic images obtained from the surface can be combined with the information obtained by the ultrasonic unit of a laser system to provide additional information from a deeply located, internal body structure, such as bone, joints, or a moving object, etc. In one embodiment, the electronically obtained images are combined with CMOS image sensors (e.g., analyzing a subject's fingerprint can give information on the blood flow of the fingertip before or after applying pressure with the finger that collapse the fingers skin capillaries and the changes may be analyzed in real-time).

[00180] For example, a dynamic fingerprinting and/or dynamic hand recognition system will be described with reference to FIGS. 3a and 3b. Initially, as shown in FIG. 3a, a finger 130 containing numerous ridges and folds is placed on a surface of transparent glass 132 so that the finger is able to be imaged (i.e., photographed or videoed) with an infrared spectrum and/or visible spectrum of a field camera, multispectral camera, or hyperspectral camera 134. Rather than simply imaging the fingertip in FIG. 3a, the entire hand of the person also may be imaged using the camera 134. The system of FIG. 3a may also record the color of the fingertip or hand of the person prior to its placement on the surface of transparent glass 132 together with the folds and ridges of the fingertip and hand.

[00181] Then, turning to FIG. 3b, a finger 136 is illustrated prior to touching the surface of transparent glass 132 and being imaged by the field camera, multispectral camera, or hyperspectral camera 134. FIG. 3b depicts the fingertip or ball of the finger 136 with its circulations, ridges, and minutiae, which are able to be imaged using the camera 134 for highly reliable identification of the person. The infrared spectrum of the camera 134 is able to record the warm circulation of blood through the fingertip or ball of the finger 136. FIG. 3c shows the ridges of the fingertip of the finger 136, but centrally, the capillaries of the finger 136 are collapsed at the area where the fingertip or finger ball is touching the surface of transparent glass 132, which indicates that a live person is being recorded.
Advantageously, the system of FIGS. 3a-3c preserves the folds in the finger 136 or, if the whole hand is placed on the glass 132, the folds in the whole hand. In the illustrative embodiment, all of this information is recorded before and after placement of the finger 136 or hand on the glass 132, and the changes are subtracted to verify the person's identity, with the physical and physiological changes that have occurred being analyzed to recognize and verify the person's identity.
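
A hedged sketch of the before/after capillary-blanching comparison is given below; the red-minus-green redness proxy, the contact mask, and the blanching threshold are illustrative assumptions rather than the method recited above:

```python
import numpy as np

def capillary_blanching_detected(before_rgb, after_rgb, contact_mask, min_drop=0.1):
    """Compare the mean redness of the contact region before and after the
    finger is pressed on the glass: in a live finger the capillaries collapse
    and the region visibly blanches (red channel drops); a printed or
    artificial finger shows no such change."""
    def redness(img, mask):
        r = img[..., 0].astype(float)
        g = img[..., 1].astype(float)
        return float((r[mask] - g[mask]).mean())
    drop = redness(before_rgb, contact_mask) - redness(after_rgb, contact_mask)
    return drop >= min_drop, drop

rng = np.random.default_rng(5)
before = rng.random((32, 32, 3)) * 0.2 + np.array([0.6, 0.3, 0.3])  # pinkish skin
after = before.copy()
mask = np.zeros((32, 32), dtype=bool)
mask[12:20, 12:20] = True
after[mask, 0] -= 0.2           # red channel fades where the finger presses

live, drop = capillary_blanching_detected(before, after, mask)
print(f"live finger: {live} (redness drop={drop:.2f})")
```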

[00182] As another example, a dynamic fingerprinting and/or dynamic hand recognition system will be described with reference to FIGS. 4a-4c. Initially, as shown in FIG. 4a, a finger 138 containing numerous ridges and folds is placed on a surface of transparent glass 140 so that the finger 138 is able to be imaged (i.e., photographed or videoed) with an infrared spectrum and/or visible spectrum by using a plurality of cameras 142 surrounding the finger 138 for 360 degree imaging of the finger 138. In the illustrative embodiment of FIG. 4a, the cameras 142 may include two cameras 142 above the surface of transparent glass 140 and one camera 142 below the surface of transparent glass 140 (e.g., a field camera or hyperspectral camera below the glass 140).

[00183] Then, turning to FIG. 4b, a finger 144 is illustrated prior to touching the surface of transparent glass 140 and being imaged by a multispectral camera or hyperspectral camera 146. FIG. 4b depicts the fingertip or ball of the finger 144 with its circulations, ridges, and minutiae, which are able to be imaged using the camera 146 for highly reliable identification of the person. The infrared spectrum of the camera 146 is able to record the warm circulation of blood through the fingertip or ball of the finger 144. Dynamic changes in the circulation and temperature are recorded by the multispectral or hyperspectral imaging system of FIG. 4b. FIG. 4c depicts the bleached vessels of the fingertip of the finger 144 as the finger 144 is pressed against the surface of the transparent glass 140. Along with the bleached vessels of the fingertip of the finger 144, the ridges and minutiae of the finger 144 are recorded by the camera 148 in FIG. 4c.

[00184] In one embodiment, the person's finger, hand, or other extremities are videoed to create more complete information regarding the identity of the person, or regarding the area of the body recorded, for future dynamic comparison, etc. In addition, a voice instruction may be included in the system of FIGS. 3a-3c to ask the person to press harder or to loosen his or her finger, so that the degree of the capillary bleaching is confirmed, etc.
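
The voice-instruction feedback described above may be sketched, for illustration only, as the following loop; the capture, blanching-metric, and speech interfaces are placeholders for whatever camera, analysis, and text-to-speech components a given embodiment provides.

    def pressure_feedback_loop(capture, blanching_metric, speak,
                               low=0.10, high=0.60, max_prompts=5):
        """Voice-guided pressure check (illustrative sketch).

        capture() returns the current fingertip image, blanching_metric()
        maps it to a 0..1 capillary-bleaching score, and speak() issues
        the voice instruction. The score bounds are assumed values.
        """
        for _ in range(max_prompts):
            score = blanching_metric(capture())
            if score < low:
                speak("Please press harder.")
            elif score > high:
                speak("Please loosen your finger.")
            else:
                return True          # bleaching within the expected live range
        return False                 # could not confirm within the prompt budget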

[00185] Also, in one or more embodiments, three-dimensional dynamic data is obtained either from multiple cameras or from mathematical analysis of the digital data obtained from the light field camera system. Advantageously, the rapid image acquisition of the light field camera, or of another multispectral or hyperspectral camera operating in the visible and infrared spectra, together with existing IR sensors and IR motion sensors, eliminates the problem of moving objects that interferes with good static facial recognition. Additional data on the skin and its changes, which cannot be obtained from a static image, can be obtained during the physical or physiological dynamic imaging with an infrared camera or multispectral camera.
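
For illustration, the three-dimensional data obtainable from multiple cameras may be derived by classical two-view triangulation, sketched below; the focal length, baseline, and disparity values in the example are assumed for illustration, not measured.

    def depth_from_disparity(disparity_px, focal_px, baseline_m):
        """Two-camera triangulation: depth = focal * baseline / disparity.

        disparity_px: horizontal pixel offset of the same feature between
        two calibrated cameras; focal_px: focal length in pixels;
        baseline_m: camera separation in meters. Tracking this depth
        frame-to-frame for a set of landmarks yields the three-dimensional
        dynamic data referred to above.
        """
        return focal_px * baseline_m / max(disparity_px, 1e-9)

    # e.g., focal_px=700, baseline_m=0.1, disparity_px=35 -> depth of 2.0 m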

[00186] In addition to use in the telemedicine system 50' described above, the dynamic image recognition system described herein may also be used for other applications, such as security system applications, HoloLens applications, other telemedicine applications (e.g., tele-imaging or tele-diagnostic systems), and other patient applications (e.g., two independent systems having the same components for dynamic image recognition are provided so that the patient and the doctor recognize each other in a dynamic form, each verified against the pre-existing dynamic image recognition data obtained during the first examination, for a two-way communication system).

[00187] In another embodiment, the system can be used in a personal computer to provide a security system that recognizes the person sending an e-mail. In one embodiment, the computer is equipped with a Global Positioning System (GPS) so that the location of the e-mail sender is revealed to the receiver, along with images, either live or recorded with the e-mail or during the telecommunication in the telemedicine system.

[00188] In another embodiment, the dynamic image recognition is used in telemedicine to evaluate changes in disease processes, such as the growth of various lesions or of a tumor over time, by analyzing pixelated images using the software of the system and aiding with diagnosis and differential diagnosis. Other applications for the dynamic imaging system described herein may include specific applications involving hospitals for patient security and operating room applications, customs departments, airports, the state department, police (e.g., for investigative analysis of a crowd or identification of a person), the military, the FBI, the CIA, various other governmental agencies, and banking institutions (i.e., for account and ATM identification). The dynamic imaging algorithm may also be used in robots, drones, agricultural applications, or military applications. The dynamic imaging system may also be useful at stadiums hosting competitive sporting events, such as football, soccer, basketball, and hockey stadiums, and at other venues involving large gatherings of people for political or non-political causes, which often require some permission to guarantee the privacy and safety of the people present at the event. In addition, the dynamic imaging system may be used in smartphones for remote recognition, in home security systems, etc.

[00189] In one or more embodiments, the dynamic imaging system may be used to observe changes over a period of time using a drone (e.g., analyzing changes over a period of time in an agricultural field by capturing two- or three-dimensional images using a drone). Also, the dynamic imaging system may be used to observe two- or three-dimensional changes in competitive games, such as soccer or football, analyzing the entire field in a very short period of time by subtracting an earlier image from the data obtained subsequently, and then analyzing or enhancing the dynamic changes via a software algorithm, which is analogous to finding a needle in a haystack instantaneously.
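
A minimal sketch of such a subtract-and-enhance step is given below, assuming the two frames (e.g., drone or stadium imagery) have already been registered (aligned); the gain and noise-floor parameters are illustrative tuning values only.

    import numpy as np

    def enhance_changes(frame_ref, frame_new, gain=4.0, noise_floor=2.0):
        """Subtract a reference frame and amplify what differs.

        frame_ref / frame_new: registered grayscale arrays of the same
        scene taken at different times. No-change pixels map to mid gray;
        anything that moved or changed is pushed toward black or white.
        """
        diff = frame_new.astype(np.float64) - frame_ref.astype(np.float64)
        diff[np.abs(diff) < noise_floor] = 0.0         # suppress sensor noise
        enhanced = np.clip(128 + gain * diff, 0, 255)  # amplify remaining changes
        return enhanced.astype(np.uint8)               # the "needle" stands out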

[00190] In one or more embodiments, the dynamic image recognition system described herein may replace previous verification/identification systems used in personal life, such as passwords, IDs, PINs, smart cards, etc. The dynamic facial recognition system also has unlimited applications in personal security, identification, passports, driver's licenses, home security systems, automated identity verification at airports, border patrol, law enforcement, video surveillance, investigation, operating systems, online banking, railway systems, dam control, medical records, all medical imaging systems, and video systems used during surgery or surgical photography to prevent mistakes in the operating room (e.g., mistaking one patient for another, or one extremity for the other). The dynamic image recognition system described herein may also be used for comparative image analysis and for recognizing trends in patients during follow-up analyses and outcome prediction. The dynamic image recognition system described herein may also be used with other imaging modalities, such as X-ray, CT-scan, MRI, positron emission tomography, photoacoustic technology and imaging, ultrasound imaging, video of a surgery or any other event, etc. Further, the dynamic imaging system may be used for image and data comparison of close or remotely located objects, mass surveillance to document time-related changes in the image, and/or recognizing a potential event and its trend.

[00191] The aforedescribed dynamic technology, its algorithm, software, or camera technology can be included, individually or in combination, in any computer to be used for dynamic image recognition.

[00192] Advantageously, when it is utilized in a medical application, the aforedescribed dynamic image recognition system enables a physician to perform a remote telemedicine consultation with a patient who is located remotely from the physician, and/or to obtain an image of a lesion located on the body using a field camera, or to record a 2-D or 3-D image, taken by X-ray, CT-scan, PET-scan, or MRI, with or without infusion, or by an ultrasound, photoacoustic, or thermoacoustic system, etc., of a lesion located in the body or outside the body of a patient. As such, the system advantageously obviates the need for the physician to be physically present in the same exact location as the patient. Thus, for example, the system enables a physician to perform an examination, including conversing with a person or a patient, in a different part of the United States or the world without the need for time-consuming and costly traveling. The examination may be done by the physician personally, by a physician assistant, or by another authorized person. Therefore, it is critical that the telemedicine system accurately verifies the patient that is receiving advice, including prescription(s), etc., so that the advice is given to, or an image is taken from, the correct patient. Further, it is critical that the remote imaging system accurately verifies that the examination is done, or the photos are taken, from the correct body portion of the intended patient, so that the images and the results can be compared and the changes subtracted to conclude an improvement or worsening of a condition when compared to the previous exam and data. As such, a part of the image might be static and a part dynamic, or the image might present a static area that becomes visible only after a dynamic change has occurred (e.g., by asking a person to "show your teeth"). In that case, the system includes the initial static photo of the face prior to the dynamic change occurring on the face; after the subject shows his or her teeth, the dynamic change affects the lips, enhancing the wrinkles around the mouth and the face, while uncovering a static picture of the teeth, which remains unchanged and by itself serves as the stable identity part of the person's image. All of this can be subtracted from the initial static image for analysis and ultimate recognition of the person.
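
For illustration only, the static/dynamic decomposition described above (e.g., for the "show your teeth" prompt) may be sketched as follows, assuming registered grayscale face images before and after the prompted expression; the tolerance value is an assumed placeholder.

    import numpy as np

    def split_static_dynamic(face_before, face_after, tol=8):
        """Separate stable identity regions from expression-driven regions.

        face_before: registered face image before the prompted expression;
        face_after: after. Pixels that barely change form the static
        identity component (e.g., the uncovered teeth); pixels that change
        strongly (lips, wrinkles around the mouth) form the dynamic
        component. Both feed the recognition stage.
        """
        diff = np.abs(face_after.astype(np.int32) - face_before.astype(np.int32))
        static_mask = diff <= tol                    # stable identity regions
        dynamic_mask = diff > tol                    # expression-driven regions
        residual = np.where(dynamic_mask, diff, 0)   # subtracted change for analysis
        return static_mask, dynamic_mask, residual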

[00193] Any of the features or attributes of the above described embodiments and variations can be used in combination with any of the other features and attributes of the above described embodiments and variations as desired.

[00194] Although the invention has been shown and described with respect to a certain embodiment or embodiments, it is apparent that this invention can be embodied in many different forms and that many other modifications and variations are possible without departing from the spirit and scope of this invention.

[00195] Moreover, while exemplary embodiments have been described herein, one of ordinary skill in the art will readily appreciate that the exemplary embodiments set forth above are merely illustrative in nature and should not be construed to limit the claims in any manner. Rather, the scope of the invention is defined only by the appended claims and their equivalents, and not by the preceding description.

[00196] The invention claimed is: