Title:
OPTICAL HEAD-MOUNTED DISPLAY WITH AUGMENTED REALITY FOR MEDICAL MONITORING, DIAGNOSIS AND TREATMENT
Document Type and Number:
WIPO Patent Application WO/2017/120288
Kind Code:
A1
Abstract:
Optical head mounted display system includes a headset which positions a transparent screen in alignment with the line of sight of at least one eye of a wearer of the headset. Data and images can be presented on the transparent display screen. A video camera is mounted to the headset and is positioned to capture video images of a scene coincident with the line of sight of the wearer of the headset. A computer processor mounted in the headset is configured to receive video image data obtained from the video camera; process the video image data to obtain a pulse of a person; receive wirelessly transmitted patient data from a patient module connected to one or more patient sensors which are directly connected or attached to the person; correlate the transmitted patient data with the person in the scene by comparing the pulse derived from the video image, with a second pulse specified in the transmitted patient data; and display at least one data element from the transmitted patient data on the transparent screen based on the correlating step.

Inventors:
KILLCOMMONS PETER (US)
KING TIMOTHY (US)
VENEMA JEROD (US)
Application Number:
PCT/US2017/012266
Publication Date:
July 13, 2017
Filing Date:
January 05, 2017
Assignee:
NEXSYS ELECTRONICS INC (US)
International Classes:
G02B27/01; A61B5/1455; A61B34/20; A61B90/30; A61B90/50; G06T5/40; H04N5/20
Domestic Patent References:
WO2015126466A12015-08-27
WO2015110859A12015-07-30
WO2016014384A22016-01-28
Foreign References:
US20150005644A12015-01-01
US20150360038A12015-12-17
US7774044B22010-08-10
Attorney, Agent or Firm:
SACCO, Robert, J. (US)
Claims:
CLAIMS

We claim:

1. An optical head mounted display system, comprising:

a headset which positions a transparent screen in alignment with the line of sight of at least one eye of a wearer of the headset;

at least one video camera mounted to the headset and which is positioned to capture video images of a scene coincident with the line of sight of the wearer;

a computer processor mounted in the headset which is configured to

receive video image data obtained from the at least one video camera;

process the video image data to obtain a pulse of a person in the scene observable by the wearer in accordance with the line of sight;

wirelessly receive transmitted patient data from a patient module connected to one or more patient sensors which are directly connected or attached to the person;

correlate the transmitted patient data with the person in the scene by comparing the pulse derived from the video image, with a second pulse specified in the transmitted patient data; and display at least one data element from the transmitted patient data on the transparent screen based on the correlating step.

2. The system according to claim 1, wherein the process applied to the video image data to obtain the pulse comprises the Eulerian Video Magnification (EVM) algorithm.

3. The system according to claim 1, wherein the transmitted patient data is selected from the group consisting of heart rate or pulse, respiration rate, blood pressure, blood oxygen levels, blood type and body temperature.

4. The system according to claim 3, wherein the transmitted patient data further includes data stored in the patient module, including one or more data elements selected from the group consisting of patient name, age, room number, insurance information, bed location, guardian contact information, diagnosis, injury, medications, time when most recent medication was received, and known drug allergies.

5. The system according to claim 1, wherein the at least one video camera is comprised of a plurality of video cameras, and wherein the video images that are captured comprise at least one of two-dimensional image data and three-dimensional image data.

6. The system according to claim 1, wherein the at least one video camera is configured to generate video imagery from light in the visible wavelength range and in the non-visible wavelength range.

7. The system according to claim 1, wherein the at least one processor performs video analytic operations to automatically identify a portion of the person where a physiological parameter is reliably measured.

8. The system according to claim 1, further comprising a laser source capable of illuminating at least a portion of the person, and an optical detector to capture scattered light reflected from the skin of the person, and wherein the at least one processor is configured to determine one or more physiological parameters based on the scattered light.

9. The system according to claim 8, wherein the one or more physiological parameters are selected from the group consisting of oxygen saturation and CO2 saturation.

10. The system according to claim 1, wherein the at least one processor is configured to process the video data to identify a type of medication appearing in the video data comprising a tablet or capsule.

11. The system according to claim 10, wherein the at least one processor is configured to further identify based on the video data a dose of the type of medication which has been identified.

12. The system according to claim 1, wherein said processor is configured to receive imagery data, position data and tracking data from a medical scanning device disposed adjacent to the person, and to cause the received imagery data from the medical scanning device to be displayed on the transparent screen in alignment with the line of sight of at least one eye of a wearer of the headset.

13. The system according to claim 12, wherein said imagery data is presented on a portion of said transparent screen in a location which is dynamically varied in accordance with a position of the medical scanning device.

14. The system according to claim 12, wherein the processor is configured to receive stored scan imagery obtained from a remote database concurrent with the operation of the scanning device on the person.

15. The system according to claim 14, wherein the processor is configured to display the stored scan imagery on the transparent screen concurrent with the imagery obtained from the scanning device.

16. The system according to claim 1, wherein the processor is configured to analyze the video image data to determine a position and orientation of the person within a field of view of the at least one video camera.

17. The system according to claim 16, wherein said processor is configured to display on the transparent screen stored scan imagery from a remote database in a correct anatomical orientation relative to the position and orientation of the person.

18. The system according to claim 16, wherein the processor is configured to display on the transparent screen a correct location and angle of instruments which are to be inserted into the person for at least one of arthroscopic or endoscopic surgery.

19. The system according to claim 16, wherein the processor is configured to display on the transparent screen a correct position and orientation of a prosthetic device.

Description:
OPTICAL HEAD-MOUNTED DISPLAY WITH AUGMENTED REALITY FOR MEDICAL MONITORING, DIAGNOSIS AND TREATMENT

BACKGROUND OF THE INVENTION

Statement of the Technical Field

[0001] The inventive arrangements relate to wearable computers and displays. More particularly, the inventive arrangements concern augmented reality optical head mounted displays for integrating electronic data with real world environments.

Description of the Related Art

[0002] There is a continuing need to improve the monitoring, diagnosis and treatment of patients in health-care environments. In recent years, optical head-mounted displays have been identified as having potential use in such healthcare applications. Optical head-mounted displays which can be worn like a pair of eyeglasses are commercially available. These types of displays are sometimes referred to as smartglasses, digital eye glass or personal imaging systems.

Various technologies are used to facilitate displays of this kind. Some of these technologies are similar to those used in heads-up displays for aircraft and automobiles, where data is presented to a user on a transparent screen so that the user can view the data concurrently with physical world views observed through the transparent screen. Exemplary display technologies for these systems can involve use of optical waveguides or scanning lasers to produce images on a clear transparent medium. Newer micro-display imaging technologies of this kind have also been proposed which utilize liquid crystal displays (LCD), liquid crystal on silicon (LCOS), digital micro-mirrors (DMD) and organic light-emitting diodes (OLED).

SUMMARY OF THE INVENTION

[0003] Embodiments of the invention concern an optical head mounted display system. The system is comprised of a headset which positions a transparent screen in alignment with the line of sight of at least one eye of a wearer of the headset. Data and images can be presented on the transparent display screen. A video camera is mounted to the headset and is positioned to capture video images of a scene coincident with the line of sight of the wearer of the headset. A computer processor is mounted in or on the headset which is configured to perform various actions. These actions include receiving video image data obtained from the video camera; processing the video image data to obtain a pulse of a person in the scene observable by the wearer in accordance with the line of sight; receiving wirelessly transmitted patient data from a patient module connected to one or more patient sensors which are directly connected or attached to the person; correlating the transmitted patient data with the person in the scene by comparing the pulse derived from the video image, with a second pulse specified in the transmitted patient data; and displaying at least one data element from the transmitted patient data on the transparent screen based on the correlating step.

[0004] In the system as described herein, the process applied to the video image data to obtain the pulse comprises the Eulerian Video Magnification (EVM) algorithm. According to one aspect, the transmitted patient data is selected from the group consisting of heart rate or pulse, respiration rate, blood pressure, blood oxygen levels, blood type and body temperature. The transmitted patient data can also include data (other than data acquired by the patient sensors) which has been previously inputted or stored in the patient module. Such additional transmitted patient data can include one or more of patient name, age, room number, insurance information, bed location, guardian contact information, diagnosis, injury, medications received, time when most recent medication was received, and known drug allergies, without limitation.

[0005] In some embodiments, the system described herein can include a plurality of video cameras. In such scenarios, the resulting video images that are captured can comprise at least one of two-dimensional image data, three-dimensional image data, or four-dimensional image data. Such video cameras can be configured to generate video imagery from light in the visible wavelength range and in the non-visible wavelength range.

[0006] In some embodiments, the at least one processor included in the optical head mounted display advantageously is configured to perform certain video analytic operations to automatically identify a portion of the person where a physiological parameter is reliably measured.

[0007] A laser source included with the system can be capable of illuminating at least a portion of the person. An optical detector can similarly be provided to capture scattered light reflected from the skin of the person. In such scenarios, the at least one processor can be configured to determine one or more physiological parameters based on the scattered light. For example, the one or more physiological parameters can be selected from the group consisting of oxygen saturation and CO2 saturation.

[0008] The system can serve other purposes as well. For example, the at least one processor can be configured to process the video data to identify a type of medication appearing in the video data comprising a tablet or capsule. The processor can further identify based on the video data a dose of the type of medication which has been identified. This information can be used by the health care practitioner to verify that the correct medication and dose is being administered to a patient.

[0009] In some embodiments, the processor is configured to receive one or more of imagery data, position data and tracking data from a medical scanning device disposed adjacent to the person. In such scenarios, the processor in the optical head mounted display system can cause the received imagery data from the medical scanning device to be displayed on the transparent screen of the display in alignment with the line of sight of at least one eye of a wearer of the headset. For example, the imagery data can be advantageously presented on a portion of said transparent screen in a location which is dynamically varied in accordance with a position of the medical scanning device.

[0010] In other embodiments the at least one processor is configured to receive stored scan imagery obtained from a remote database concurrent with the operation of the scanning device on the person. In such scenarios, the processor can be configured to display the stored scan imagery on the transparent screen concurrent with the imagery obtained from the scanning device for comparison purposes.

[0011] Using video analytics, the processor can analyze the video image data to determine a position and orientation of the person within a field of view of the at least one video camera. Based on this analysis, the processor can cause to be displayed on the transparent screen stored scan imagery from a remote database in a correct anatomical orientation relative to the position and orientation of the person. Alternatively, the processor can be configured to display on the transparent screen a correct location and angle of instruments which are to be inserted into the person for at least one of arthroscopic or endoscopic surgery. In yet another embodiment, the processor can be configured to display on the transparent screen a correct position and orientation of a prosthetic device.

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] Embodiments will be described with reference to the following drawing figures, in which like numerals represent like items throughout the figures, and in which:

[0013] FIG. 1 is a block diagram that is useful for understanding an optical head-mounted display computer system.

[0014] FIG. 2 is a conceptual diagram that is useful for understanding the operation of the optical head-mounted display computing system.

[0015] FIG. 3 is a drawing that is useful for understanding the information that is presented to a wearer of the optical head-mounted display computing system.

[0016] FIG. 4 is a flowchart that is useful for understanding the invention.

DETAILED DESCRIPTION

[0017] The invention is described with reference to the attached figures. The figures are not drawn to scale and they are provided merely to illustrate the instant invention. Several aspects of the invention are described below with reference to example applications for illustration. It should be understood that numerous specific details, relationships, and methods are set forth to provide a full understanding of the invention. However, the invention can be practiced without one or more of the specific details or with other methods. In other instances, well-known structures or operations are not shown in detail to avoid obscuring the invention. The invention is not limited by the illustrated ordering of acts or events, as some acts may occur in different orders and/or concurrently with other acts or events. Furthermore, not all illustrated acts or events are required to implement a methodology in accordance with the invention.

[0018] Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present invention should be or are in any single embodiment of the invention. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Thus, discussions of the features and advantages, and similar language, throughout the specification may, but do not necessarily, refer to the same embodiment.

[0019] Referring now to FIG. 1, there is shown an optical head-mounted display (OHMD) which can be worn like a pair of eyeglasses for presenting data and/or images to a user in a hands-free format. The OHMD 100 can include various components. These may include a power source (not shown), a computer processor (e.g. a microprocessor) 112, and a data storage unit (memory) 106. The OHMD 100 can also include a transparent optical display unit (TODU) (i.e. a heads-up display) 102, and one or more short range data transceivers 114, 116 which are capable of communicating in accordance with a short range wireless data communication standard. For example, such data transceivers can be compatible with the well-known Bluetooth communication standard and/or IEEE 802.11b/g (WiFi) standard.

[0020] The computer processor 112 in an exemplary optical head-mounted display as described herein may be under the control of an operating system 113 such as the ubiquitous Android operating system. Of course, embodiments are not limited in this regard and any other computer operating system can be used for this purpose. A data bus 122 is provided to facilitate communications of data among the various components. The data storage unit 106 comprises a computer readable medium 110 on which certain instructions 108 are stored for the microprocessor. For example, one or more software applications (apps) 108 can be stored in the data storage unit 106 to facilitate performance of certain device operations and functions. A user input device 104 can comprise one or more switches or controls which are responsive to user touch activation. The touch switches are used to facilitate control of certain actions performed by the computer processor 112.

[0021] The computer-readable storage medium 110 should be understood to include a single medium or multiple media that store the one or more sets of instructions. The term "computer-readable storage medium" shall also be understood to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term "computer-readable medium" shall accordingly be taken to include, but not be limited to, one or more read-only (non-volatile) memories, random access memories, or other re-writable (volatile) memories. Accordingly, the disclosure is considered to include any one or more of a computer-readable medium as listed herein and to include recognized equivalents and successor media, in which the software implementations herein are stored.

[0022] An OHMD 100 as described herein can collect information from sensors which are internal or external to the device. Exemplary sensor components which can be included in the OHMD include one or more video cameras 118 and infrared sensors 120. Each video camera 118 can be a conventional video camera or a stereoscopic video camera. If more than one video camera 118 is provided, image data from the video camera array can facilitate generation of conventional two-dimensional (2D) imagery, three-dimensional (3D) imagery, and/or 4D imagery in which time, temperature or some other parameter is the fourth dimension.

[0023] The one or more video cameras 118 can be comprised of sensors which are capable of generating images based on visible light, non-visible light, or both. For example, one or more of the video cameras 118 can be comprised of image sensors that are capable of capturing images in the near infrared, far infrared and/or ultraviolet spectrum. As explained below in further detail, the image data that is collected by the video camera or camera array can facilitate basic functions such as patient detection (e.g., face detection), measurement of patient vital signs (e.g., patient heartbeat, blood pressure), and patient identification (e.g., using facial recognition methods). Further, the image data can facilitate detection of physiological changes and the presence of foreign substances. Other sensors which can be provided in the OHMD 100 include gyroscopes 103, accelerometers 105, magnetometers (compass), ambient light sensors, and audio transducers 123. The purpose of these components will become more apparent as the discussion progresses.

[0024] Those skilled in the art will appreciate that the computer system architecture illustrated in FIG. 1 is one possible example of an OHMD computer system. However, the invention is not limited in this regard and any other suitable computer system architecture can also be used without limitation. Dedicated hardware implementations including, but not limited to, application-specific integrated circuits, programmable logic arrays, and other hardware devices can likewise be constructed to implement the methods described herein. Applications that can include the apparatus and systems of various embodiments broadly include a variety of electronic and computer systems. Some embodiments may implement functions in two or more specific interconnected hardware modules or devices with related control and data signals communicated between and through the modules, or as portions of an application-specific integrated circuit. Thus, the exemplary system is applicable to software, firmware, and hardware implementations.

[0025] In accordance with various embodiments of the present invention, the methods described herein are stored as software programs in a computer-readable storage medium 110 and are configured for running on the computer processor 112. Furthermore, software implementations can include, but are not limited to, distributed processing, component/object distributed processing, parallel processing, and virtual machine processing, any of which can also be constructed to implement the methods described herein. In the various embodiments of the present invention, data transceivers 114, 116 can facilitate data communications with a network environment in accordance with instructions 108 to facilitate such processing.

[0026] The computer processor 112 continuously analyzes video image data acquired by the video camera 118. The video analysis can be used to facilitate various functionalities. According to one aspect, the video analysis involves recognition of the presence of a human face. In such scenarios, the computer processor can be configured with facial recognition software which is capable of identifying the presence of human faces in a scene that is captured by the video camera. Facial image analytics and facial recognition algorithms are well known in the art and therefore will not be described here in detail. However, it is sufficient to note that the computer processor 112 can determine the presence of a person in a video image frame based on such analysis. In some embodiments, the process can further involve comparing the facial image to a database of stored facial image data which can be used to identify the person associated with the face. The purpose of such identification is described below in further detail.

[0027] Other video analytics functions performed by the computer processor 112 can involve identification of certain objects which may be pertinent to the medical practitioner. For example, the video analytics can automatically identify various medications when presented in pill form. The term "pill" as used herein can include without limitation tablets, capsules and variants thereof, including hard-shelled capsules and soft-shelled capsules. As is known, medications of this kind can have a unique appearance to avoid confusion among different medication types. As such, the pills can have a shape, a size, a color, a color pattern, and markings (including alphanumeric markings) which can be used to differentiate various medications. These visual cues can be detected by the one or more imaging devices (e.g., video cameras 118) and analyzed with the video analytics software to identify the medication type. For example, the video analytics operations can differentiate between a Valium pill and a Viagra pill. The video analytics operations described herein can also be configured to differentiate between different dosage concentrations contained in various pills if such information is derivable from the shape, size, color, color pattern and/or markings on the pill. It should be appreciated that the video imagery captured for use in connection with these identification operations need not be limited to the visible light spectrum. Instead (or in addition to such visible light imagery) the video cameras 118 can capture images in the non-visible light spectrum to help facilitate and/or verify such medication identification.
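
The final classification step that paragraph [0027] describes, i.e. mapping extracted visual cues to a medication type and dose, can be sketched as a simple lookup. This is an illustrative simplification in Python, not the patented implementation: the table entries, field layout, and function name `identify_pill` are invented for this example, and the feature extraction from camera frames is out of scope here.

```python
# Hypothetical feature table. A real system would match against a curated
# medication-appearance database; the entries below are illustrative only.
PILL_DB = {
    ("round", "blue", "V 10"): ("diazepam", "10 mg"),
    ("diamond", "blue", "VGR 50"): ("sildenafil", "50 mg"),
    ("oval", "white", "L484"): ("acetaminophen", "500 mg"),
}

def identify_pill(shape, color, imprint):
    """Map visually extracted cues (shape, color, alphanumeric imprint)
    to a (medication, dose) pair, mirroring the differentiation by
    shape, color and markings described in paragraph [0027]."""
    return PILL_DB.get((shape, color, imprint), ("unknown", None))

print(identify_pill("diamond", "blue", "VGR 50"))  # → ('sildenafil', '50 mg')
print(identify_pill("round", "green", "XYZ"))      # → ('unknown', None)
```

In practice the dose differentiation of claim 11 falls out of the same lookup whenever the dose is encoded in the imprint or color pattern, as the paragraph above notes.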

[0028] As explained below in further detail, the results of the video analytics operations described herein can be presented to the medical practitioner by means of the transparent optical display unit 102. Alternatively, an audio transducer 123 can be used to audibly annunciate such information. For example, when a medical practitioner orients the one or more video cameras 118 so that their field of view is directed toward a pill held in a tray, an audio transducer can be used to audibly annunciate a medication type which has been detected based on the video analysis. If more than one pill type is present, this information can be similarly communicated to a medical practitioner. Such pill identification can help prevent errors with respect to medication type and dosing.

[0029] The OHMD 100 also uses video data acquired by video camera 118 to determine the pulse of an individual or patient 202 who is being observed by the wearer through the transparent display screen 206 of the OHMD 100. This information is then displayed to the wearer of the OHMD in the TODU 102 in real time. The pulse information is advantageously derived by computer processor 112 by using video analytics which measure temporal changes in color and/or other variations occurring at the patient's skin to accurately estimate heart rate. The foregoing technique is known as the Eulerian Video Magnification (EVM) algorithm and consists of two sequential processing steps.

[0030] In a first step, spatial filtering is applied to the video frames to suppress high frequency video noise. In a second step, each pixel is analyzed over a period of time and certain movements and/or color variations in the skin are evaluated using temporal filtering techniques. The algorithm facilitates accurate determination of a person's pulse rate without direct contact, based only on video analysis. This patient data is then displayed to the individual wearing the OHMD 100 via a transparent screen 206 of the TODU 102 that overlays the field of vision of the wearer. The displayed patient data 208 is conceptually shown in FIGs. 2 and 3. The pulse rate data determined using the Eulerian algorithm can be further used to derive blood pressure information for the observed patient. This blood pressure information can also be displayed on the transparent screen 206.
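
The two sequential steps of paragraph [0030] can be sketched as follows. This is a heavily simplified illustration, not the patented implementation: full Eulerian Video Magnification decomposes each frame into a spatial pyramid and amplifies a temporally band-passed signal, whereas this sketch collapses the spatial filtering to a frame average and reads the pulse off the dominant frequency in the heart-rate band. The function name `estimate_pulse_bpm` and the 0.7-4.0 Hz band are assumptions for this example.

```python
import numpy as np

def estimate_pulse_bpm(frames, fps, band=(0.7, 4.0)):
    """Estimate pulse rate from a stack of video frames (T, H, W).

    Step 1 (spatial filtering): collapse each frame to its mean
    intensity, suppressing high-frequency spatial noise.
    Step 2 (temporal analysis): examine the intensity signal over time
    and pick the dominant frequency within a plausible heart-rate band.
    """
    frames = np.asarray(frames, dtype=float)
    signal = frames.reshape(frames.shape[0], -1).mean(axis=1)
    signal = signal - signal.mean()          # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    dominant = freqs[mask][np.argmax(spectrum[mask])]
    return dominant * 60.0                   # Hz -> beats per minute

# Synthetic check: skin-tone intensity oscillating at 1.2 Hz (72 bpm).
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
frames = 100 + np.sin(2 * np.pi * 1.2 * t)[:, None, None] * np.ones((1, 8, 8))
print(round(estimate_pulse_bpm(frames, fps)))  # → 72
```

The non-contact character of the measurement is visible in the sketch: only pixel intensities over time are consumed, with no sensor attached to the person.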

[0031] The OHMD 100 can include a combination of different optical emitters (e.g. lasers) and optical sensors to facilitate measurements pertaining to the patient's body. For example, one or more optical emitters 119 can output one or more laser beams. These laser beams can have optical wavelengths in the near infrared and/or far infrared frequency range which are used for certain measurement purposes. As discussed below in further detail, the optical emitters 119 can also include a laser beam having an optical wavelength in the visible spectrum. As explained below, such a laser beam in the visible spectrum can facilitate aiming the near and far infrared optical emitters (which are in the non-visible range) at portions of the patient's body which provide reliable optical measurement results. The optical sensors can then detect and analyze the scattered or reflected light from the patient to measure various patient conditions. For example, principles of spectrophotometry and/or plethysmography can be applied to measure the percentages of oxyhemoglobin and deoxyhemoglobin in the blood. As is known, light emitted at 660 nm is better absorbed by reduced (deoxygenated) hemoglobin and light emitted at 940 nm is better absorbed by saturated (oxygenated) hemoglobin. So the detected scattered or reflected light from the laser(s) can be used to determine blood oxygen saturation.
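
The differential absorption at 660 nm and 940 nm described in paragraph [0031] is conventionally turned into a saturation estimate via the pulse-oximetry "ratio of ratios". The sketch below illustrates that computation only; the linear calibration SpO2 = a - b*R and its constants are textbook approximations, not values from this specification, and real oximeters are empirically calibrated per device.

```python
import numpy as np

def spo2_ratio_of_ratios(red_660, ir_940, a=110.0, b=25.0):
    """Estimate blood oxygen saturation from reflected-light intensity
    traces at 660 nm (red) and 940 nm (infrared).

    For each wavelength the pulsatile (AC) component is normalized by
    the steady (DC) component; the ratio R of these two normalized
    signals is then mapped to SpO2 with an illustrative linear fit.
    """
    red = np.asarray(red_660, dtype=float)
    ir = np.asarray(ir_940, dtype=float)
    r = (red.std() / red.mean()) / (ir.std() / ir.mean())
    return a - b * r

# Synthetic traces with equal relative pulsatility, so R = 1 exactly.
t = np.linspace(0, 10, 300)
red = 1.0 + 0.02 * np.sin(2 * np.pi * 1.2 * t)
ir = 2.0 + 0.04 * np.sin(2 * np.pi * 1.2 * t)
print(round(spo2_ratio_of_ratios(red, ir)))  # → 85
```

Normalizing by the DC component is what makes the measurement insensitive to overall illumination and skin reflectance, which is why the same scheme works for scattered laser light captured at a distance.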

[0032] The use of such optical measuring techniques can be combined with principles of augmented reality (AR) to automatically identify a standardized or preferred location on the patient's body that will reliably provide consistent readings for parameters such as temperature, blood pressure, and oxygen saturation or CO2 saturation. For example, a video marker or telestration in combination with a visible laser can be used to facilitate aiming of the optical laser(s) and optical sensors described herein. Video analytics can be used to identify a preferred location on the patient's body where an optical measurement is known to provide reliable results. Once such a location has been identified, the video marker or telestration can be displayed on the transparent optical display unit 102, so that it appears overlaid on the particular portion of the patient's body which is predetermined to provide such reliable readings. The medical practitioner who is wearing the OHMD 100 can then adjust their head position so that a visible laser beam emitted from the OHMD 100 impinges upon the patient's body at the marked location.

[0033] According to a further aspect, the pulse rate data (and optionally the blood pressure information) derived by the Eulerian algorithm is used to correlate or identify certain additional or secondary information pertaining to the patient who is being visually observed. As shown in FIG. 2, one or more wearable sensors 210 can be disposed on or attached to the patient 202 to collect health-related data. Such health related data can include pulse rate and a variety of other secondary health-related vital information data derived from the wearable sensors 210. This data is collected at a patient module 204 and then transmitted using a personal area network, such as Bluetooth or WiFi, that is compatible with at least one of the wireless data transceivers 114, 116 provided in the OHMD. The patient module can broadcast this information directly from the module itself using a wireless transceiver (not shown) contained therein. According to one embodiment, the information can be broadcast periodically. Alternatively, the information can be broadcast in response to certain conditions. For example, the information can be broadcast only when the patient module 204 detects the presence of the OHMD 100. The presence of the OHMD can be detected based on RF signal emission from the OHMD 100 (e.g., Bluetooth or WiFi signals) or by any other suitable means.

[0034] The transmitted patient data (TPD) is received by the OHMD 100 and is analyzed to determine whether the received data pertains to the patient who is being observed by the wearer of the OHMD through the transparent display lenses of that device. In this regard it should be understood that an OHMD in a healthcare environment may concurrently be receiving TPD from a multiplicity of different patient modules. Accordingly, it is important to ensure that the correct TPD information is displayed for the OHMD wearer when a particular patient is observed. The correlation function is performed by comparing the pulse rate data derived by the Eulerian algorithm to the pulse rate data contained in the TPD. Once a particular TPD set has been correlated to the patient who is being observed, secondary information contained in the TPD can be displayed as shown in FIG. 3 by using the transparent display screen 206. In this way, simply by looking at or towards a patient through the transparent display screen 206, the TPD is caused to be correlated with the observed patient. The TPD provides a more complete overview of the patient information and vitals. The process also allows a particular patient to be located relative to other patients in the same room. Advantageously, the information collected at the OHMD is easily associated with the individual by using visual telestration on the lens to indicate which identified individual is the source of the health information presented.

[0035] The TPD can include a wide variety of patient information, including identifying information, secondary vital signs and/or health-related information. According to one aspect, the transmitted patient data is selected from the group consisting of heart rate or pulse, respiration rate, blood pressure, blood oxygen levels, blood type and body temperature. The transmitted patient data can also include data (other than data acquired by the patient sensors 210) which has been previously input or stored in the patient module. Such additional transmitted patient data can include patient name, age, room number, insurance information, bed location, guardian contact information, diagnosis, injury, medications received, time when the most recent medication was received, and known drug allergies, without limitation.

[0036] According to a further aspect, the correlation process described herein can be enhanced by using a long-range infrared sensor 120 to determine a patient's temperature. Long range infrared thermal sensors are well known in the art and therefore will not be described here in detail. However, it will be understood that the long range infrared thermal sensor can be used to remotely determine a temperature of a person. The temperature information can be captured at the OHMD and then used together with the pulse rate data derived by the Eulerian algorithm to correlate the data measured at the OHMD 100 with the TPD. According to a further aspect, the facial image recognition process described above can be used as a further means to verify the correlation process.

[0037] An OHMD as described herein can also use other sensors to detect additional vital parameters about the patient being observed, including surface temperature, calculated core temperature, size and shape of lesions, induration, inflammation, and other parameters helpful to the health practitioner. The glasses can also incorporate additional sensors to detect chemicals, odors, and so on, useful for detecting explosives or dangerous gases.

[0038] According to another aspect, artificial intelligence (AI) and video analytics can be used to constantly monitor the image data acquired by the OHMD 100 and associated with each patient 202. The AI and video analytics are advantageously configured to perform health diagnostic functions which are designed to identify the presence of disease conditions. For example, video analytics can be used to identify the presence of certain types of skin cancers (e.g. melanoma). In other embodiments, chemical sensors could be used to detect the occurrence of ketoacidosis, which is associated with patients who are suffering from diabetes. When such a medical condition is detected, the AI can automatically alert the wearer of the OHMD 100 using telestration, audio prompts or other indicia to identify the presence of such a disease condition.

[0039] The OHMD 100 also incorporates the ability for a remote collaborator to see the camera output of the glasses, so that the remote collaborator can see exactly what the wearer sees. The remote collaborator can then use a finger or stylus to write on the live view. This causes a telestration to appear as an overlay on the transparent display screen 206, so that the remote observer is able to guide, point out, or provide "over the shoulder" advice, using the telestration to point out specific objects of interest in the field of view. Alternatively (or in addition thereto), context-sensitive differential diagnosis and treatment suggestions can be delivered to the OHMDS from artificial intelligence tools, such as IBM Watson or similar systems, to assist in rapid diagnosis and suggested medication from an existing formulary.
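
Rendering a remote collaborator's stroke as an overlay requires mapping coordinates from the shared video frame into the transparent display's coordinate space. The following is a minimal sketch under a simplifying assumption that the camera frame and the wearer's display cover the same field of view; a real system would additionally compensate for camera/display offset and lens distortion. All names are illustrative.

```python
def remote_stroke_to_overlay(stroke_px, video_size, display_size):
    """Map a remote collaborator's stroke, drawn in video-frame pixel
    coordinates, to transparent-display coordinates by proportional
    scaling (assumes camera and display share the same field of view)."""
    vw, vh = video_size
    dw, dh = display_size
    return [(x * dw / vw, y * dh / vh) for x, y in stroke_px]
```

A stroke drawn at the center of the remote collaborator's view then appears at the center of the wearer's overlay, keeping the annotation aligned with the object it points to.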

[0040] Collaborative medical telestration takes this concept further by incorporating analytics and stereoscopic measurements using sensors that are mounted on the glasses. These sensors give a back-end analytics computer the ability to measure, document, and suggest suitable components during surgery or other health procedures. These capabilities derive from the combination of telestration, which shows the analytics engine the area of interest, and the engine then sharing relevant clinical information about that area.

[0041] According to yet another aspect, the OHMD 100 can make use of remote analytics to select relevant imaging or data from the patient's medical chart and display that data on the transparent display screen 206. This information is accessed by means of data transceiver 116 and/or data transceiver 118 from a remote server. The data can then be displayed, navigated, or dictated using integrated voice recognition and head gestures. For example, the ability to display relevant X-rays or ultrasound data is achieved via a head movement to launch the viewer, and a second to "stow" the viewer. The head movements can be detected by means of one or more gyroscopes 103, accelerometers 105 or any other suitable means. The data viewer may also be visualized using only voice commands which are detected by audio transducer 123. The voice command processing described herein, as well as the video analytics, can be performed using computer processor 112, or can be performed with the assistance of a remote server.
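
The launch/stow head gesture can be sketched as a simple state machine driven by gyroscope pitch. This is an illustrative assumption of how such a gesture might be debounced; the threshold values and class name are not from the disclosure.

```python
class GestureViewerToggle:
    """Toggle the data viewer on an upward head tilt, using pitch
    derived from the gyroscopes. Re-arms only after the head returns
    near level, so one sustained tilt fires exactly once."""

    def __init__(self, pitch_threshold_deg=20.0, level_band_deg=5.0):
        self.pitch_threshold = pitch_threshold_deg
        self.level_band = level_band_deg
        self.viewer_open = False
        self._armed = True

    def update(self, pitch_deg):
        # Toggle when an armed upward tilt crosses the threshold.
        if self._armed and pitch_deg > self.pitch_threshold:
            self.viewer_open = not self.viewer_open
            self._armed = False
        # Re-arm once the head is roughly level again.
        elif abs(pitch_deg) < self.level_band:
            self._armed = True
        return self.viewer_open
```

The first tilt launches the viewer; holding the tilt does not toggle it again; returning to level and tilting once more stows it.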

[0042] Fiducial markers and microelectromechanical systems (MEMS) sensors (such as digital compasses and six-degree-of-freedom accelerometer/gyroscope units) can be used to track a position of a medical device (e.g. an ultrasound imaging head). Simultaneous localization and mapping (SLAM) techniques using markerless tracking methods (such as Parallel Tracking and Mapping (PTAM)) are also available for this purpose. As is known, PTAM is a camera tracking system for AR which requires no markers, pre-made maps, known templates, or inertial sensors.

Collectively, all of these methods are referred to herein as position tracking methods.

[0043] One or more of the position tracking methods described herein can be used to facilitate insertion of a virtual display of the output of a position-tracked medical device, so that the display is convenient to the area of operation of the clinician. For example, when a clinician is conducting an ultrasound, the processor 112 can receive real-time ultrasound images output from the ultrasound device. The processor can then cause OHMDS 100 to display the output image generated by the ultrasound device. The output images can be presented as a floating image that is positioned near the ultrasound probe in the field of view of the medical practitioner who is viewing the scene through the transparent optical display unit 102. With such an arrangement, the sonographer can avoid the need to turn away from the patient to view the output.
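
Anchoring the floating image near the tracked probe can be sketched with a simple pinhole projection from the probe's tracked 3D position into display coordinates, plus a fixed offset so the image floats beside the probe rather than on top of it. The focal length, offset, and function name here are illustrative assumptions; a real system would use the headset's calibrated camera-to-display transform.

```python
def floating_image_anchor(probe_xyz, focal_px, display_center,
                          offset_px=(80.0, -40.0)):
    """Project the probe's tracked 3D position (camera coordinates,
    metres, z pointing forward) through a pinhole model, then offset
    the anchor so the ultrasound image floats beside the probe."""
    x, y, z = probe_xyz
    cx, cy = display_center
    # Pinhole projection: screen offset is proportional to x/z and y/z.
    u = cx + focal_px * x / z
    v = cy + focal_px * y / z
    dx, dy = offset_px
    return (u + dx, v + dy)
```

As the wearer or the probe moves, re-running the projection each frame keeps the ultrasound output image adjacent to the probe in the practitioner's view.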

[0044] In an embodiment, the processor can receive from a remote database imagery captured at an earlier time involving a previous ultrasound scan of the same body location which is currently undergoing ultrasound scanning. Such imagery from a previous scan can be correlated with the current position of the ultrasound scanning head and automatically displayed in the transparent optical display unit for comparison purposes.

[0045] In a further embodiment, the OHMDS 100 can facilitate certain types of medical procedures by automatically displaying transverse slices or 3D representations of underlying organs of the body. More particularly, the processor 112 can utilize video analytics to determine a portion of a patient's body which is being observed and a point of view of such observation. The processor 112 can also have remote access to database imagery specific to the patient. Such imagery can include imagery that has been obtained using one or more techniques such as Magnetic Resonance Imaging (MRI), computerized tomography (CT), positron emission tomography (PET), or ultrasound scanning. The stored imagery data can then be overlaid upon the patient in the transparent optical display unit 102.

[0046] In such an embodiment, the medical practitioner will be presented with a live view of the patient observed through the transparent optical display. Overlaid on such live view of the patient will be the stored imagery data, which is automatically rotated and scaled to its correct anatomical position relative to the position of the patient in the field of view of the medical practitioner. For example, the overlaid imagery can be advantageously presented to the medical practitioner overlaid on a patient surgical field to better identify the specific location of the underlying organ, tumor, or disease process of interest. Such information can be useful for the medical practitioner in the course of exploratory surgeries like biopsies or excision of tumors, placement of prosthetics, and so on. Similarly, the processor 112 can be configured to automatically identify and visually indicate a correct position of prosthetic device components such as hips, knees, acetabular cups, and so on to assist in the process of reconstructive surgery.

[0047] A further enhancement is contemplated in various embodiments wherein the processor 112 calculates and causes to be displayed to the medical practitioner, using OHMDS 100, certain information pertinent to the surgical procedure. For example, the processor can utilize video analytics to automatically identify a portion of a patient's body observed through the OHMDS where a medical procedure is to be performed. This process can be facilitated by use of fiducial markers disposed on the patient. Once the position and orientation have been determined for the observed portion of the patient's body, the processor can generate overlay imagery. Such imagery, when viewed by the medical practitioner through the transparent optical display unit, will appear in the medical practitioner's field of view in the surgical field.

[0048] In some embodiments, such imagery can graphically mark a predetermined location and angle/orientation of instruments which are to be inserted for arthroscopic or endoscopic surgeries. The surgeon or medical practitioner can then use the graphical display to assist in positioning and orienting such instruments. In some embodiments, these techniques can provide a virtual endoscopy view to the medical practitioner that is updated in real time as the instrument is advanced into the patient.

[0049] For example, virtual colonoscopy data from a CT study can be combined with live imagery from an actual colonoscopy in order to guide the wearer of the OHMDS 100 to the correct location of a specific lesion that needs to be removed. This process is performed by using the processor 112 to compare the calculated distance in the virtual colonoscopy which is needed in order to access the lesion, to the real distance that the colonoscope has been advanced. The imagery associated with the virtual colonoscopy can be presented on the transparent display of the OHMDS 100 along with the real video view from the scope, and a real-time map overview indicating the scope location in the colon.

[0050] A further use of the accelerometer and SLAM data can involve identifying a region outside of the current area of interest in which to display relevant controls to manipulate AR interactions. For example, looking 10 degrees above a horizon line parallel with the floor would reveal relevant medical device controls which can be activated using hand gestures in front of the display.
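
The distance comparison in paragraph [0049] can be sketched as follows. The function name, the guidance labels, and the margin value are illustrative assumptions; in practice both distances would come from the CT-derived centerline path and the colonoscope's insertion-depth measurement.

```python
def scope_guidance(virtual_distance_cm, real_advance_cm, lesion_margin_cm=2.0):
    """Compare the distance to the lesion computed from the virtual
    colonoscopy (CT study) against the distance the real colonoscope
    has actually been advanced, and return a simple guidance cue."""
    remaining = virtual_distance_cm - real_advance_cm
    if remaining > lesion_margin_cm:
        return ("advance", remaining)       # lesion still ahead
    if remaining < -lesion_margin_cm:
        return ("withdraw", -remaining)     # scope has passed the lesion
    return ("at_lesion", abs(remaining))    # within the margin of the lesion
```

The returned cue could drive the telestration and map overview on the transparent display, updating as the scope advances.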

[0051] The OHMDS 100 can also facilitate other useful actions in a surgical environment to aid and assist a medical practitioner. For example, an embodiment OHMDS can display an automated checklist and count of devices, sponges, and so on which are required in the due course of surgery.

[0052] In addition to the various patient care functions described herein, the OHMDS 100 can facilitate other functions and activities in a healthcare environment. For example, the OHMDS can have stored in memory (e.g., data storage unit 106) information concerning a specific location or locations in a facility where certain resources can be found. This information can be particularly important with respect to locations of certain types of equipment that can be needed on very short notice to deal with medical emergencies. An example of equipment of this type can include an automated defibrillator. When information about the location of such equipment is requested by a user, the processor can cause to be displayed in the transparent optical display unit 102 directional prompts (e.g. audio and/or visual prompts) to guide the wearer to the sought-after medical resource. In some scenarios, where it is anticipated that the emergency equipment may be periodically moved to different locations (e.g. to different parts of a facility) it can be advantageous to incorporate a tracking and reporting device (such as a GPS tracker) within the emergency equipment. The device can then periodically report on its position within the facility to a central database. Such location information can then be remotely accessed by the OHMDS 100 when needed by using one or more of the data transceivers 114, 116. Once the current location of the emergency equipment is detected, the wearer of the OHMDS can be directed to such equipment as described herein.
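
Resolving a device's current position from the central database of periodic tracker reports amounts to selecting the most recent report for that device. The following sketch assumes illustrative field names (`equipment_id`, `timestamp`, `location`); the actual database schema is not part of the disclosure.

```python
def latest_equipment_location(reports, equipment_id):
    """Return the most recently reported location of a tracked piece
    of emergency equipment, or None if it has never reported."""
    candidates = [r for r in reports if r["equipment_id"] == equipment_id]
    if not candidates:
        return None
    # The newest report (largest timestamp) gives the current position.
    return max(candidates, key=lambda r: r["timestamp"])["location"]
```

The OHMDS would query this over its data transceivers and then generate the directional prompts described above toward the returned location.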

[0053] Although the invention has been illustrated and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon the reading and understanding of this specification and the annexed drawings. In addition, while a particular feature of the invention may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Thus, the breadth and scope of the present invention should not be limited by any of the above described embodiments. Rather, the scope of the invention should be defined in accordance with the following claims and their equivalents.

[0054] While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. Numerous changes to the disclosed embodiments can be made in accordance with the disclosure herein without departing from the spirit or scope of the invention. Thus, the breadth and scope of the present invention should not be limited by any of the above described embodiments. Rather, the scope of the invention should be defined in accordance with the following claims and their equivalents.