Title:
METHOD TO DETECT AND MEASURE A WOUND SITE ON A MOBILE DEVICE
Document Type and Number:
WIPO Patent Application WO/2022/248964
Kind Code:
A1
Abstract:
A system for measuring a wound site including an image capture device, a touchscreen, a vibration motor, and a processor. The image capture device may be configured to capture a digital image and capture a depth map associated with the digital image. The processor may be configured to determine a bounding box of the wound site, determine a wound mask, determine a wound boundary, determine whether the wound boundary is aligned within a camera frame, and generate a wound map. The processor and vibration motor may be configured to provide a series of vibrations in response to the processor determining that the wound is aligned within the camera frame.

Inventors:
EDLUND CHESTER (US)
LAWRENCE BRIAN (US)
SANDROUSSI CHRISTOPHER J (US)
LAIRD JAMES (US)
Application Number:
PCT/IB2022/054468
Publication Date:
December 01, 2022
Filing Date:
May 13, 2022
Assignee:
KCI MFG UNLIMITED COMPANY (IE)
International Classes:
G06T7/00; A61B5/00; G06T7/11; G06T7/136; G06T7/194; G06T7/62
Foreign References:
US20190388057A12019-12-26
US20150150457A12015-06-04
Other References:
JUSZCZYK JAN MARIA ET AL: "Wound 3D Geometrical Feature Estimation Using Poisson Reconstruction", IEEE ACCESS, IEEE, USA, vol. 9, 30 October 2020 (2020-10-30), pages 7894 - 7907, XP011830917, DOI: 10.1109/ACCESS.2020.3035125
WANG CHUANBO ET AL: "Fully automatic wound segmentation with deep convolutional neural networks", SCIENTIFIC REPORTS, vol. 10, no. 1, 1 December 2020 (2020-12-01), XP055835732, Retrieved from the Internet DOI: 10.1038/s41598-020-78799-w
Claims:
CLAIMS

What is claimed is:

1. A system for measuring a wound site, comprising: an image capture device; a touchscreen; a vibration motor; and a processor; wherein the image capture device is configured to: capture a digital image, and capture a depth map associated with the digital image; wherein the processor is configured to: determine a bounding box of the wound site by processing the digital image with a first trained neural network, determine a wound mask by processing the digital image with a second trained neural network, determine a wound boundary from the wound mask, determine whether the wound boundary is aligned within a camera frame, and generate a wound map from the depth map, wound mask, and wound boundary; wherein the processor and vibration motor are configured to: provide a series of vibrations at the vibration motor in response to the processor determining that the wound is aligned within the camera frame.

2. The system of claim 1, wherein the processor is further configured to: scale the digital image to generate a scaled image; pad the scaled image to generate a padded image; determine a plurality of possible wound regions by inputting the padded image into the first trained neural network; and determine the bounding box by applying a non-maximum suppression algorithm to the plurality of possible wound regions.

3. The system of claim 2, wherein the processor is further configured to: select a selected possible wound region from the plurality of possible wound regions, wherein the selected possible wound region has a highest objectiveness score within the plurality of possible wound regions; move the selected possible wound region to a set of probable wound regions; calculate an Intersection over Union of the selected possible wound region with each possible wound region in the plurality of possible wound regions; and remove each possible wound region in the plurality of possible wound regions having an Intersection over Union with the selected possible wound region lower than a threshold.

4. The system of claim 1, wherein the processor is further configured to: scale the digital image to generate a scaled image; normalize pixel values of the scaled image to generate a normalized image; determine a raw mask by inputting the normalized image into the second trained neural network; and transform the raw mask into a wound mask.

5. The system of claim 4, wherein the processor is further configured to: select a pixel value threshold; set values of raw mask pixels greater than the pixel value threshold to 1; and set values of raw mask pixels not greater than the pixel value threshold to 0.

6. The system of claim 5, wherein the processor is further configured to: set values of raw mask pixels having a value of 1 to 255; and output the raw mask as the wound mask.

7. The system of claim 1, wherein the processor is further configured to: generate a wound surface mask by interpolating depth information of portions of the depth map outside of the wound boundary; generate wound depth data by calculating a depth difference between the wound surface map and the wound map; calculate a mathematical length of the wound and a mathematical width of the wound from the wound map; and calculate a wound mathematical volume from the mathematical length of the wound, the mathematical width of the wound, and depth data of the wound map.

8. The system of claim 7, wherein the processor is further configured to: assign an orientation axis to the wound map; calculate a standard length of the wound and a standard width of the wound from the wound map and the orientation axis; determine a standard depth of the wound from the wound map; and calculate a standard wound volume from the standard length of the wound, the standard width of the wound, and the standard depth of the wound.

9. The system of claim 1, wherein the series of vibrations comprises two quick vibrations with ascending intensity.

10. The system of claim 1, wherein the processor and touchscreen are configured to: display a visual indicator on the touchscreen in response to the processor determining that the wound is aligned within the camera frame.

11. The system of claim 1, further comprising: a speaker assembly; wherein the processor and speaker assembly are configured to: output an audible alert from the speaker assembly in response to the processor determining that the wound is aligned within the camera frame.

12. The system of claim 1, further comprising: a speaker assembly; wherein the processor and speaker assembly are configured to: output an audible alert from the speaker assembly in response to the processor determining that the wound is not aligned within the camera frame.

13. The system of claim 1, wherein the image capture device comprises: an electro-optical camera; an infrared camera; and a light emitting module.

14. The system of claim 13, wherein the light emitting module is configured to emit multiple light rays according to a pattern.

15. The system of claim 1, wherein the image capture device comprises: an electro-optical camera; and a time-of-flight camera.

16. A non-transitory computer-readable medium comprising executable instructions for performing steps in a method for measuring a wound site, wherein the executable instructions configure a processor to: receive a digital image; receive a depth map associated with the digital image; determine a bounding box of the wound site by processing the digital image with a first trained neural network; determine a wound mask by processing the digital image with a second trained neural network; determine a wound boundary from the wound mask; determine whether the wound boundary is aligned within a camera frame; and generate a wound map from the depth map, wound mask, and wound boundary.

17. The non-transitory computer-readable medium of claim 16, wherein the executable instructions further configure the controller to: output a signal to generate a first series of vibrations at a vibration motor in response to the processor determining that the wound is aligned within the camera frame.

18. A method for measuring a wound site, comprising: capturing a digital image of the wound site with an image capture device; capturing a depth map associated with the digital image with the image capture device; determining a bounding box of the wound site by processing the digital image with a first trained neural network; determining a wound mask of the wound site by processing the digital image with a second trained neural network; determining a wound boundary from the wound mask; generating a wound map from the depth map, wound mask, and wound boundary; determining whether the wound boundary is aligned within a camera frame; and providing a series of vibrations at a vibration motor if the wound is aligned within the camera frame.

19. The method of claim 18, wherein the series of vibrations comprises two quick vibrations with ascending intensity.

20. The method of claim 18, further comprising outputting an audible alert from a speaker assembly if the wound boundary is aligned within the camera frame.

Description:
METHOD TO DETECT AND MEASURE A WOUND SITE ON A MOBILE DEVICE

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of priority to U.S. Provisional Application No. 63/194,541, filed on May 28, 2021, which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

[0002] The invention set forth in the appended claims relates generally to tissue treatment systems. More particularly, but without limitation, the present disclosure relates to systems and methods for accomplishing acquisition and processing of wound images, as well as photogrammetry.

BACKGROUND

[0003] A wound is generally defined as a break in the epithelial integrity of the skin. Such an injury, however, may be much deeper, including the dermis, subcutaneous tissue, fascia, muscle, and even bone. Proper wound healing is a highly complex, dynamic, and coordinated series of steps leading to tissue repair. Acute wound healing is a dynamic process involving both resident and migratory cell populations acting in a coordinated manner within the extra-cellular matrix environment to repair the injured tissues. Some wounds fail to heal in this manner (for a variety of reasons) and may be referred to as chronic wounds.

[0004] Following tissue injury, the coordinated healing of a wound will typically involve four overlapping but well-defined phases: hemostasis, inflammation, proliferation, and remodeling. Hemostasis involves the first steps in wound response and repair which are bleeding, coagulation, and platelet and complement activation. Inflammation peaks near the end of the first day. Cell proliferation occurs over the next 7-30 days and involves the time period over which wound area measurements may be of most benefit. During this time, fibroplasia, angiogenesis, re-epithelialization, and extra-cellular matrix synthesis occur. The initial collagen formation in a wound typically peaks in approximately 7 days. The wound re-epithelialization occurs in about 48 hours under optimal conditions, at which time the wound may be completely sealed. A healing wound may have 15% to 20% of full tensile strength at 3 weeks and 60% of full strength at 4 months. After the first month, a degradation and remodeling stage begins, wherein cellularity and vascularity decrease and tensile strength increases. Formation of a mature scar often requires 6 to 12 months.

[0005] There are various wound parameters that may assist a clinician in determining and tracking healing progress of a wound. For example, wound dimensions, including wound area and volume measurements, may provide a clinician with knowledge as to whether or not a wound is healing and, if the wound is healing, how rapidly the wound is healing. Wound assessment is an important process to properly treating a wound, as improper or incomplete assessment may result in a wide variety of complications.

[0006] While wound measurements may provide valuable parameters for helping a clinician assess wound healing progress, the size of the wound may not provide a clinician with a full picture to fully assess whether or how a wound is healing. For example, while the size of a wound may be reduced during treatment, certain parts of a wound may become infected, or wound healing may become stalled, due to infection or comorbidity. A clinician may often examine the wound bed for indications of wound healing, such as formation of granulation tissue or early-stage epithelial growth, or look for signs and symptoms of infection. Wound tissue includes a wound bed, peri-wound areas, and wound edges. Health of a wound may be determined by the color of tissue, with certain problems often presenting with distinct colors at the wound. For example, normal granulation tissue typically has a red, shiny, textured appearance and bleeds readily, whereas necrotic tissue (i.e., dead tissue) may either be yellow-gray and soft, generally known as “slough” tissue, or hard and blackish-brown in color, generally known as “eschar” tissue. A clinician may observe and monitor these and other wound tissues to determine wound healing progress of the overall wound, as well as specific wound regions.

[0007] Because wound treatment can be costly in both materials and professional care time, a treatment that is based on an accurate assessment of the wound and the wound healing process can be essential.

BRIEF SUMMARY

[0008] New and useful systems, apparatuses, and methods for wound image analysis are set forth in the appended claims. Illustrative embodiments are also provided to enable a person skilled in the art to make and use the claimed subject matter.

[0009] For example, a system for measuring a wound site is presented. The system may include an image capture device, a touchscreen, a vibration motor, and a processor. The image capture device may be configured to capture a digital image and capture a depth map associated with the digital image. The processor may be configured to determine a bounding box of the wound site by processing the digital image with a first trained neural network, determine a wound mask by processing the digital image with a second trained neural network, determine a wound boundary from the wound mask, determine whether the wound boundary is aligned within a camera frame, and generate a wound map from the depth map, wound mask, and wound boundary. The processor and vibration motor may be configured to provide a series of vibrations at the vibration motor in response to the processor determining that the wound is aligned within the camera frame.

[0010] According to exemplary embodiments, the processor may be further configured to scale the digital image to generate a scaled image, pad the scaled image to generate a padded image, determine a plurality of possible wound regions by inputting the padded image into the first trained neural network, and determine the bounding box by applying a non-maximum suppression algorithm to the plurality of possible wound regions. In some examples, the processor may be further configured to select a selected possible wound region from the plurality of possible wound regions. The selected possible wound region may have a highest objectiveness score within the plurality of possible wound regions. The processor may be further configured to move the selected possible wound region to a set of probable wound regions, calculate an Intersection over Union of the selected possible wound region with each possible wound region in the plurality of possible wound regions, and remove each possible wound region in the plurality of possible wound regions having an Intersection over Union with the selected possible wound region lower than a threshold.

[0011] In some examples, the processor may be further configured to scale the digital image to generate a scaled image, normalize pixel values of the scaled image to generate a normalized image, determine a raw mask by inputting the normalized image into the second trained neural network, and transform the raw mask into a wound mask. In some embodiments, the processor may be further configured to select a pixel value threshold, set values of raw mask pixels greater than the pixel value threshold to 1, and set values of raw mask pixels not greater than the pixel value threshold to 0. In exemplary embodiments, the processor may be further configured to set values of raw mask pixels having a value of 1 to 255 and output the raw mask as the wound mask.

[0012] In other features, the processor may be further configured to generate a wound surface mask by interpolating depth information of portions of the depth map outside of the wound boundary, generate wound depth data by calculating a depth difference between the wound surface map and the wound map, calculate a mathematical length of the wound and a mathematical width of the wound from the wound map, and calculate a wound mathematical volume from the mathematical length of the wound, the mathematical width of the wound, and depth data of the wound map.

[0013] Alternatively, in other example embodiments, the processor may be further configured to assign an orientation axis to the wound map, calculate a standard length of the wound and a standard width of the wound from the wound map and the orientation axis, determine a standard depth of the wound from the wound map, and calculate a standard wound volume from the standard length of the wound, the standard width of the wound, and the standard depth of the wound.

[0014] In some examples, the series of vibrations may include two quick vibrations with ascending intensity. In other features, the processor and touchscreen may be configured to display a visual indicator on the touchscreen in response to the processor determining that the wound is aligned within the camera frame. According to some examples, the system may further include a speaker assembly. The processor and the speaker assembly may be configured to output an audible alert from the speaker assembly in response to the processor determining that the wound is aligned within the camera frame. In some examples, the processor and speaker assembly may be configured to output an audible alert from the speaker assembly in response to the processor determining that the wound is not aligned within the camera frame.

[0015] In other features, the image capture device may include an electro-optical camera, an infrared camera, and a light emitting module. In some examples, the light emitting module may be configured to emit multiple light rays according to a pattern. In some examples, the image capture device may include an electro-optical camera and a time-of-flight camera.

[0016] A non-transitory computer-readable medium including executable instructions for performing steps in a method for measuring a wound site is also described. The executable instructions may configure a processor to receive a digital image, receive a depth map associated with the digital image, determine a bounding box of the wound site by processing the digital image with a first trained neural network, determine a wound mask by processing the digital image with a second trained neural network, determine a wound boundary from the wound mask, determine whether the wound boundary is aligned within a camera frame, and generate a wound map from the depth map, wound mask, and wound boundary. In some examples, the executable instructions may further configure the processor to output a signal to generate a first series of vibrations at a vibration motor in response to the processor determining that the wound is aligned within the camera frame.

[0017] A method for measuring a wound site is also described. The method may include capturing a digital image of the wound site with an image capture device, capturing a depth map associated with the digital image with the image capture device, determining a bounding box of the wound site by processing the digital image with a first trained neural network, determining a wound mask of the wound site by processing the digital image with a second trained neural network, determining a wound boundary from the wound mask, generating a wound map from the depth map, wound mask, and wound boundary, determining whether the wound boundary is aligned within a camera frame, and providing a series of vibrations at a vibration motor if the wound is aligned within the camera frame. In some examples, the series of vibrations may include two quick vibrations with ascending intensity. In some examples, the method may further include outputting an audible alert from a speaker assembly if the wound boundary is aligned within the camera frame.

[0018] Objectives, advantages, and a preferred mode of making and using the claimed subject matter may be understood best by reference to the accompanying drawings in conjunction with the following detailed description of illustrative embodiments.

BRIEF DESCRIPTION OF THE DRAWINGS

[0019] Figure 1 is an illustration of a therapy network in accordance with an exemplary embodiment;

[0020] Figure 2 is a perspective view illustrating additional details that may be associated with some example embodiments of the therapy network of Figure 1;

[0021] Figure 3 is a schematic diagram illustrating some examples of the mobile device of Figures 1 and 2 and the image capture device of Figure 2;

[0022] Figure 4 is a schematic diagram illustrating additional examples of the mobile device of Figures 1 and 2 and the image capture device 202 of Figure 2;

[0023] Figure 5A is a schematic diagram illustrating examples of modules which may be stored on the non-transitory computer readable storage medium of Figure 3 and which may be accessed by the processor and other components of the mobile device of Figures 1, 2, and 3;

[0024] Figure 5B is a schematic diagram illustrating examples of modules which may be stored on the non-transitory computer readable storage medium of Figure 4 and which may be accessed by the processor and other components of the mobile device of Figures 1, 2, and 4;

[0025] Figure 6 is a flowchart of an exemplary process for capturing and processing a wound image from the image capture device of Figure 2, Figure 3, and Figure 4;

[0026] Figure 7 is a flowchart of an exemplary process for processing a captured image utilizing machine learning components at step 10b of Figure 6;

[0027] Figure 8 is a flowchart of an exemplary process for determining a bounding box of a wound by processing a digital image with a trained neural network at step 20c of Figure 7;

[0028] Figure 9 is a flowchart of an exemplary process of a non-maximum suppression algorithm which may be applied at step 30d of Figure 8;

[0029] Figure 10 is a flowchart of an exemplary process of determining a wound mask by processing a digital image with a trained neural network at step 20d of Figure 7;

[0030] Figure 11 is a flowchart of an exemplary process of transforming a raw mask into a wound mask at step 50d of Figure 10;

[0031] Figure 12 is a flowchart of an exemplary process of calculating measurement values at step 10d of Figure 6;

[0032] Figure 13 is a flowchart of an exemplary process of calculating standard values at step 10g of Figure 6;

[0033] Figure 14 is a flowchart of an exemplary process of notifying the user of correct image capture device alignment which may be performed at step 10c of Figure 6;

[0034] Figure 15A illustrates examples of notifying the user of correct alignment at step 90d of Figure 14 on the touchscreen 308;

[0035] Figure 15B shows an example of a message 1506 to the user to hold the image capture device still for the image capture interval;

[0036] Figure 15C shows an example of how the user interface may display the wound boundary calculated at step 20e of Figure 7 to the user on the touchscreen;

[0037] Figure 16 is a flowchart of an exemplary process of the mobile device managing the process of the user inputting the head-to-toe vector and calculating standard values at step 10g of Figure 6 and at Figure 13;

[0038] Figure 17A shows an example of a user interface adaptation of the process of Figure 16 displayed on examples of the touchscreen of the mobile device of Figures 1-4;

[0039] Figure 17B shows an example of a user interface adaptation of the process of Figure 16 displayed on examples of the touchscreen of the mobile device of Figures 1-4;

[0040] Figure 17C shows an example of a user interface adaptation of the process of Figure 16 displayed on examples of the touchscreen of the mobile device of Figures 1-4;

[0041] Figure 17D shows an example of a user interface adaptation of the process of Figure 16 displayed on examples of the touchscreen of the mobile device of Figures 1-4;

[0042] Figure 17E shows an example of a user interface adaptation of the process of Figure 16 displayed on examples of the touchscreen of the mobile device of Figures 1-4;

[0043] Figure 17F shows an example of a user interface adaptation of the process of Figure 16 displayed on examples of the touchscreen of the mobile device of Figures 1-4;

[0044] Figure 17G shows an example of a user interface adaptation of the process of Figure 16 displayed on examples of the touchscreen of the mobile device of Figures 1-4;

[0045] Figure 17H shows an example of a user interface adaptation of the process of Figure 16 displayed on examples of the touchscreen of the mobile device of Figures 1-4;

[0046] Figure 17I shows an example of a user interface adaptation of the process of Figure 16 displayed on examples of the touchscreen of the mobile device of Figures 1-4; and

[0047] Figure 18 is a flowchart of an exemplary process for training a machine learning model for use in the processes of Figures 6, 7, 8, and 10.

DESCRIPTION OF EXAMPLE EMBODIMENTS

[0048] The following description of example embodiments provides information that enables a person skilled in the art to make and use the subject matter set forth in the appended claims, but it may omit certain details already well-known in the art. The following detailed description is, therefore, to be taken as illustrative and not limiting.

[0049] The example embodiments may also be described herein with reference to spatial relationships between various elements or to the spatial orientation of various elements depicted in the attached drawings. In general, such relationships or orientation assume a frame of reference consistent with or relative to a patient in a position to receive treatment. However, as should be recognized by those skilled in the art, this frame of reference is merely a descriptive expedient rather than a strict prescription.

[0050] Figure 1 is a schematic diagram of an example embodiment of a therapy network 100 that can support a wound imaging and diagnostic application in accordance with this specification. The therapy network 100 may include a clinical setting 102, which may include an environment where a patient 104 with a tissue site 106 may be evaluated and/or treated by a clinician 108. The clinician 108 may use a mobile device 110, in conjunction with the wound imaging and diagnostic application, to capture, edit, and analyze images related to the tissue site 106.

[0051] The term “tissue site” in this context broadly refers to a wound, defect, or other treatment target located on or within tissue, including, but not limited to, bone tissue, adipose tissue, muscle tissue, neural tissue, dermal tissue, vascular tissue, connective tissue, cartilage, tendons, or ligaments. A wound may include chronic, acute, traumatic, subacute, and dehisced wounds, partial-thickness burns, ulcers (such as diabetic, pressure, or venous insufficiency ulcers), flaps, and grafts, for example. The term “tissue site” may also refer to areas of any tissue that are not necessarily wounded or defective, but are instead areas in which it may be desirable to add or promote the growth of additional tissue. For example, negative pressure may be applied to a tissue site to grow additional tissue that may be harvested and transplanted.

[0052] The term “clinician” is used herein as meaning any medical professional, user, family member of a patient, or patient who interacts or interfaces with the various aspects of care related to a tissue site.

[0053] A mobile device for the purposes of this application may include any combination of a computer or microprocessor. The computer or microprocessor may be programmed to implement one or more software algorithms for achieving the functionality described in the specification and corresponding figures. The mobile device, such as mobile device 110, may also include a communication device, and may be a smartphone, a tablet computer, or other device that is capable of storing a software application programmed for a specific operating system (e.g., iOS, Android, and Windows). The mobile device 110 may also include an electronic display and a graphical user interface (GUI), for providing visual images and messages to a user, such as a clinician or patient. The mobile device 110 may be configured to communicate with one or more networks 112 of the therapy network 100. In some embodiments, the mobile device 110 may include a cellular modem and may be configured to communicate with the network(s) 112 through a cellular connection. In other embodiments, the mobile device 110 may include a Bluetooth® radio or other wireless radio technology for communicating with the network(s) 112. The mobile device 110 may be configured to transmit data related to the tissue site 106 of the patient 104.

[0054] The therapy network 100 may also include a support center 114 that may be in communication with the mobile device 110 through network(s) 112. For example, the mobile device 110 may be configured to transmit data through network(s) 112 to the support center 114. The support center 114 may support a wound imaging database 116. In some embodiments, the support center 114 may include both a clinical support center 118 and a technical support center 120. The clinical support center 118 may function as a centralized center for clinicians to contact regarding questions they may have related to imaging of specific wounds with which they may be presented. The technical support center 120 may serve as a contact point for solving technical issues with use of the wound imaging and diagnostic application.

[0055] The therapy network 100 may also include other entities that may communicate with clinical settings, mobile devices, and support centers through network(s) 112. For example, the therapy network 100 may include a third party 122. In some embodiments, the third party 122 may be an image-processing vendor. Various image-processing vendors may be included as part of the therapy network 100, to provide expertise and support for wound images that may be particularly unique or challenging to process and analyze. Such image-processing vendors may offer one or more additional software packages that may be used for processing specific aspects of captured wound images. In these embodiments, a representative in the clinical support center 118 may determine that a particular image requires the additional processing expertise offered by a specific image-processing vendor and may route the image file(s) to that vendor. In some embodiments, the wound imaging and diagnostic application may prompt the user, such as the clinician, for routing the image to the third-party vendor, or in some cases, may be configured to automatically route the image to one or more particular image-processing vendors.

[0056] Referring to Figure 2, an exemplary patient environment, such as clinical setting 102, is shown with the patient 104 having a tissue site 106. Mobile device 110 is also shown, with an image capture device 202, which may be utilized to capture an image of the tissue site 106. In some examples, the captured image may include one or more two-dimensional digital images, such as a raster graphic including a dot matrix data structure representing a grid of pixels which may be viewed via a bitmapped display. In some examples, the captured image may include a stream of consecutive still images, such as a video. In some examples, the captured image may also include distance or depth information associated with each pixel of the captured image. The captured image may then be transmitted from the image capture device 202 to the mobile device 110. The image capture device 202 may be a digital camera, including a digital camera with a radiation source, such as an infrared or laser emitter configured to emit a pattern onto the tissue site 106. In some examples, the image capture device 202 may be a digital camera, including a digital camera with a time-of-flight camera. In general, to expedite capturing and working with an image of the tissue site 106, the image capture device 202 may be in the form of a digital camera that is configured to be physically connected to the mobile device 110 and may communicate with the mobile device 110 using a wired connection. In some examples, the image capture device 202 may be configured to be wirelessly connected to the mobile device 110. In some examples, the image capture device 202 may utilize a memory device (not shown) that may be transferred between electronic devices. The memory device may include an electronic non-volatile computer memory storage medium capable of being electronically erased and reprogrammed, such as flash memory. The memory device may include any other memory device with which the mobile device 110 may be compatible.

[0057] As previously discussed, the image capture device 202 may be used to capture images which may be incorporated into the wound imaging and diagnostic application. The captured images may then be shared among interested parties, such as the clinician, image processing vendors, and the patient. Wound images captured by the image capture device 202 may be used by the wound imaging and diagnostic application to determine a depth map associated with each captured wound image, determine a bounding box of the wound, determine a wound mask, calculate a wound boundary, generate a wound map, and calculate various characteristics of the wound. As also previously mentioned, the image capture device 202 may include a three-dimensional camera connected to the mobile device 110, which may also be used to capture wound images that may be used by the wound imaging and diagnostic application to automatically determine one or more wound dimensions and upload the dimensions to the proper data fields in the wound imaging application. Additionally, the image capture device 202 may be used to capture images of the tissue site 106 over time, in order for a clinician, a patient, or other interested party to monitor the healing progress of the tissue site 106. Users, such as clinicians, may also have the ability to upload images previously taken, which may be stored in a secure gallery on the mobile device 110. Tissue site images captured by the image capture device 202 may each be stored in an image database, such as wound imaging database 116, associated with the wound imaging and diagnostic application and therapy network 100.

[0058] Figure 3 is a schematic diagram illustrating some examples of the mobile device 110 of Figures 1 and 2 and the image capture device 202 of Figure 2. As previously described, the mobile device 110 may comprise a computer or a microprocessor. For example, the mobile device 110 may include a processor 302, a memory 304, such as random-access memory (RAM), and a non-transitory computer readable storage medium 306, such as a hard disk drive (HDD), single-level cell (SLC) NAND flash, multi-level cell (MLC) NAND flash, triple-level cell (TLC) NAND flash, quad-level cell (QLC) NAND flash, NOR flash, or any other suitable storage medium. The mobile device 110 may also include an electronic display, such as touchscreen 308. As previously described, the mobile device 110 may also be configured to communicate with one or more networks 112 of the therapy network 100, and may include a communication device, such as a cellular modem or transceiver 310. In some examples, the mobile device 110 may also include an accelerometer 312, such as a three-axis accelerometer. According to some examples, the processor 302, memory 304, non-transitory computer readable storage medium 306, transceiver 310, and accelerometer 312 may be contained within a housing 314. In some examples, the image capture device 202 and the touchscreen 308 may be coupled to the housing 314. According to illustrative embodiments, the image capture device 202, memory 304, non-transitory computer readable storage medium 306, touchscreen 308, transceiver 310, and accelerometer 312 may each be operatively coupled to the processor 302 and/or each other.

[0059] As illustrated in the example of Figure 3, the image capture device 202 may include a first camera module 316, a second camera module 318, and a light emitting module 320. In some examples, the first camera module 316 may be an electro-optical camera suitable for detecting and converting visible light into an electronic signal, and the second camera module 318 may be an infrared camera suitable for detecting and converting infrared light into an electronic signal. The light emitting module 320 may be configured to emit multiple light rays, such as multiple rays of infrared light, towards an object to be detected. In operation, the light emitting module 320 may emit the multiple light rays according to a pattern, such that a spacing between each ray of the multiple light rays at a given distance is known. Thus, when the multiple light rays are emitted onto the object to be detected, the distance between the light emitting module 320 and various points on the object to be detected may be calculated by measuring the spacing between the points of light from the multiple light rays projected onto the surface of the object to be detected, and a three-dimensional depth map of the surface of the object to be detected may be generated.
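
The triangulation idea described above can be illustrated with a short sketch. This is a minimal example, not the patent's implementation: it assumes a simple pinhole model with a hypothetical emitter-to-camera baseline and focal length, and a per-dot disparity measured from the infrared image.

```python
import numpy as np

def depth_from_dot_disparity(disparity_px, baseline_m=0.05, focal_length_px=600.0):
    """Estimate depth (in meters) for each projected dot from its observed disparity.

    disparity_px: per-dot pixel shift between the expected and observed dot
    positions (hypothetical measurement). baseline_m and focal_length_px are
    assumed calibration constants for the emitter/camera pair.
    """
    disparity = np.asarray(disparity_px, dtype=np.float64)
    # Standard triangulation relation: depth = baseline * focal_length / disparity.
    with np.errstate(divide="ignore"):
        return baseline_m * focal_length_px / disparity

# Dots with larger disparity are closer to the device.
print(depth_from_dot_disparity([30.0, 15.0, 10.0]))  # [1.0, 2.0, 3.0] meters
```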

[0060] As shown in Figure 3, in some examples, the mobile device 110 may also contain an audio output system, such as a speaker assembly 322. The speaker assembly 322 may contain a digital-to-analog converter (DAC) operatively coupled to the processor 302, an amplifier operatively coupled to the DAC, and/or one or more speakers operatively coupled to the amplifier or directly to the DAC. According to the illustrative embodiments, the mobile device 110 may also contain a haptic feedback system, such as a vibration motor 324. The vibration motor 324 may be operatively coupled to the processor 302.

[0061] Figure 4 is a schematic diagram illustrating additional examples of the mobile device 110 of Figures 1 and 2 and the image capture device 202 of Figure 2. As illustrated in Figure 4, some examples of the image capture device 202 may include a time-of-flight camera 402. For example, the time-of-flight camera 402 may be any camera suitable for producing a depth map of the surface of the object to be detected through light detection.

[0062] Figure 5A is a schematic diagram illustrating examples of modules which may be stored on the non-transitory computer readable storage medium 306 of Figure 3 and which may be accessed by the processor 302 and other components of the mobile device 110 of Figures 1, 2, and 3. For example, the non-transitory computer readable storage medium 306 may contain a measurement management module, such as measurement manager 502, a module managing audio and haptic feedback provided to the user, such as an audio and haptic feedback manager 504, a module responsible for the detection of wounds on images from the image capture device 202, such as a wound detector module 506, a module for interfacing with and controlling the image capture device 202, such as camera module 508, a module for storing image and measurement data for the captured images, such as camera data module 510, a module for managing machine learning models, such as machine learning model service module 512, a module for processing images using the machine learning models, such as machine learning processor 514, and/or a calculation engine 516.

[0063] In some examples, the measurement manager 502 may be responsible for controlling an overall wound measurement process. For example, the measurement manager 502 may control the wound scanning process by starting, stopping, and canceling measurements. In some examples, the measurement manager 502 may control the phase of the wound scanning process, for example, initializing the capture session, such as by initializing components of the image capture device 202. Illustrative embodiments of the measurement manager 502 may also selectively introduce delays into the wound scanning process, for example, to allow the user sufficient time to align the device with the wound. In some examples, the measurement manager 502 may hold the current state of the wound scanning process, for example, by storing states and measurements for processed images captured from the image capture device 202. In some embodiments, the measurement manager 502 may communicate with additional modules. For example, the measurement manager 502 may communicate with the wound detector module 506 in order to initialize an image capture session, start and stop a video preview from the image capture device 202, and read captured images, depth maps, camera specific data, and wound masks and wound predictions.

[0064] According to illustrative embodiments, the audio and haptic feedback manager 504 may be responsible for providing audio and haptic feedback to the user during specific events in the scanning process. For example, the audio and haptic feedback manager 504 may contain instructions to cause the speaker assembly 322 to play a sequence of sounds, and/or cause the vibration motor 324 to play a sequence of vibrations. For example, at a beginning of a capture event, such as a capture session, the audio and haptic feedback manager 504 may cause the vibration motor 324 to play one quick vibration. In some examples, when a wound is correctly aligned with a camera, such as the image capture device 202, the audio and haptic feedback manager 504 may cause the speaker assembly 322 to play a sound and cause the vibration motor 324 to play two quick vibrations with ascending intensity. During a countdown event, such as while processing a captured image, evaluating wound alignment, and/or calculating wound dimensions, the audio and haptic feedback manager 504 may cause the speaker assembly 322 to play a sound. According to illustrative embodiments, when wound measurement is finished, the audio and haptic feedback manager 504 may cause the speaker assembly 322 to play a sound and the vibration motor 324 to play two quick vibrations with ascending intensity. In some examples, if wound measurement fails, the audio and haptic feedback manager 504 may cause the speaker assembly 322 to play a sound and the vibration motor 324 to play two quick vibrations with descending intensity.

[0065] In some embodiments, the wound detector module 506 may be responsible for detecting wounds on images captured by the image capture device 202. For example, the wound detector module 506 may initialize a capture session and start and stop the video preview from the image capture device 202. The wound detector module 506 may also communicate with the machine learning processor 514 to request detection of a wound mask and/or predictions for images obtained from the image capture device 202. In some examples, the wound detector module 506 may communicate with the machine learning model service 512 to request the latest machine learning models from a server. For example, the wound detector module 506 and/or the machine learning model service 512 may communicate with the support center 114 by accessing the network 112 through the transceiver 310. Additionally or alternatively, the wound detector module 506 may create and return objects to the camera data module 510. For example, the wound detector module 506 may return data objects such as captured images, depth maps, camera intrinsic data, and calculated values to the camera data module 510.

[0066] According to illustrative embodiments, the camera data module 510 may initialize capture sessions, start and stop video previews displayed on the touchscreen 308, and return captured images and depth maps from the image capture device 202.

[0067] In some examples, the camera data module 510 may store captured images and measurement data for the images. For example, the camera data module 510 may store images captured from the image capture device 202, depth maps captured from the image capture device 202, intrinsic data such as image metadata from the image capture device 202, and wound masks and predictions from the machine learning processor 514. Additionally or alternatively, the camera data module 510 may convert depth maps from half-precision floating-point format (FP16 or float16) to single-precision floating-point format (FP32 or float32), and may call a calculation engine to calculate wound measurements.
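
As a minimal sketch of the FP16-to-FP32 conversion mentioned above (the array contents are hypothetical), the cast can be done in one step with NumPy before the depth map is handed to the calculation engine:

```python
import numpy as np

# Hypothetical depth map delivered in half precision (FP16), values in meters.
depth_fp16 = np.array([[0.412, 0.415], [0.420, 0.431]], dtype=np.float16)

# Convert to single precision (FP32) so downstream measurement arithmetic
# does not inherit FP16's limited precision and range.
depth_fp32 = depth_fp16.astype(np.float32)
print(depth_fp32.dtype)  # float32
```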

[0068] In some examples, the machine learning model service 512 may be responsible for managing the machine learning models used by the machine learning processor 514. For example, the machine learning model service 512 may access the network 112 through the transceiver 310, and communicate with a server at the support center 114 or third party 122. The machine learning model service 512 may check the server for updated machine learning models, and update machine learning models stored locally on the mobile device 110 as necessary. If updates are necessary, the machine learning model service 512 may compile and validate the downloaded machine learning models. If no new models are available from the server or if the machine learning model service 512 cannot access the server, then the machine learning model service 512 may provide the machine learning processor 514 with locally stored machine learning models.

[0069] According to example embodiments, the machine learning processor 514 may be responsible for processing wound images captured by the image capture device 202. The machine learning processor 514 may detect wounds on the wound images, generate a wound mask corresponding with the detected wounds, and calculate predictions for the detected wounds. For example, the machine learning processor 514 may include a segmentation model for detecting a wound boundary and/or a wound mask from an input image, and an object detection model for detecting wound objects.

[0070] Figure 5B is a schematic diagram illustrating examples of modules which may be stored on the non-transitory computer readable storage medium 306 of Figure 4 and which may be accessed by the processor 302 and other components of the mobile device 110 of Figures 1, 2, and 4. For example, the non-transitory computer readable storage medium 306 may contain a time-of-flight camera management module, such as time-of-flight camera module 518, a machine learning management and execution module, such as machine learning component 520, and a module managing audio and haptic feedback provided to the user, such as an audio and haptic feedback manager 522.

[0071] In some examples, the time-of-flight camera module 518 may be responsible for interfacing with and/or controlling the time-of-flight camera 402. For example, the time-of-flight camera 402 may utilize infrared light to measure the distance between it and objects within its field of view. The time-of-flight camera 402 may return data in Android dense depth image format (DEPTH16), with each pixel containing range or distance information and a confidence measure. Each pixel may be a 16-bit sample which represents a depth ranging measurement from a depth camera. The 16-bit sample may also include a confidence value representing a confidence of the sample.
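
A short sketch of unpacking DEPTH16 samples follows. It reflects the documented Android layout, in which the 13 low bits carry the range in millimeters and the 3 high bits carry a confidence code (0 meaning unknown); the percentage mapping follows the formula given in the Android documentation, and the function name and example values are illustrative.

```python
import numpy as np

def decode_depth16(samples_u16):
    """Split DEPTH16 samples into range (mm) and a 0..1 confidence value."""
    samples = np.asarray(samples_u16, dtype=np.uint16)
    range_mm = samples & 0x1FFF            # low 13 bits: range in millimeters
    conf_code = (samples >> 13) & 0x7      # high 3 bits: confidence code
    # Code 0 means "unknown" and is treated as full confidence; 1-7 scale up.
    confidence = np.where(conf_code == 0, 1.0, (conf_code - 1) / 7.0)
    return range_mm, confidence

# Example: a pixel 412 mm away reported with the highest confidence code (7).
sample = (7 << 13) | 412
print(decode_depth16([sample]))
```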

[0072] According to some embodiments, the machine learning component 520 may include an object detection model for detecting wounds present in wound images, a post-processing model which takes object detection model outputs as an input and returns detected wound objects, and a segmentation model for detecting wound masks from an input image.

[0073] According to illustrative embodiments, the audio and haptic feedback manager 522 may cause the speaker assembly 322 to play a start sound at the beginning of a scanning process, such as a capture event, a countdown sound that plays every other second of the scanning process, an end sound that plays at the end of the scanning process, and an error sound that plays when an error occurs during the scanning process. Additionally or alternatively, the audio and haptic feedback manager 522 may cause the vibration motor 324 to play two sequences of vibrations with increasing intensity at the beginning of the scanning process and/or at the end of the scanning process, and two sequences of vibrations with decreasing intensity at the end of the scanning process.

[0074] Figure 6 is a flowchart of an exemplary process 10 for capturing and processing a wound image from the image capture device 202 of Figure 2, Figure 3, and/or Figure 4. At step 10a, a wound image capture session may be initialized. The wound detector module 506, camera module 508, and/or time-of-flight camera module 518 may obtain a continuous stream of wound images from the image capture device 202 with associated depth maps or depth information. In some examples, the wound images may be individually captured still images. In some examples, the wound images may be still frames from a video stream. The captured wound images may be processed at step 10b with machine learning components, such as the machine learning processor 514 or machine learning component 520. The machine learning processor 514 or machine learning component 520 may detect wounds such as the tissue site 106 within the wound images, and calculate bounding boxes, wound masks, and wound boundaries for each wound of each wound image. In some examples, wounds may be detected and bounding boxes may be drawn utilizing an image or wound detection model. In some examples, the wound masks and wound boundaries may be generated using an image or wound segmentation model.

[0075] At step 10c, wound alignment may be calculated from the wound predictions, such as bounding boxes, wound masks, and/or wound boundaries. For example, the wound detector module 506, machine learning processor 514, and/or the machine learning component 520 may use the wound predictions to evaluate whether the wound is correctly aligned in the current wound images received from the image capture device 202. For example, if the wound is outside of the wound image, the touchscreen 308 may indicate that the wound is not correctly aligned. For example, an edge of a camera viewfinder reticle may be drawn as semi-transparent to indicate misalignment of a wound. In some examples, the audio and haptic feedback module 504 and/or 522 may alert the user of a misalignment. If all wounds are inside of the wound image, then a green checkmark may be displayed on the touchscreen 308, and/or the audio and haptic feedback module 504 and/or 522 may alert the user of correct alignment. After the wound is correctly aligned within the image capture device 202, the measurement phase may begin at step 10d.
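
A simplified sketch of the alignment check at step 10c is shown below. It assumes the wound prediction has been reduced to a pixel-space bounding box in (x1, y1, x2, y2) form and simply verifies that the box sits fully inside the camera frame with a small relative margin; the actual evaluation may also use the wound mask and boundary.

```python
def is_wound_aligned(bounding_box, frame_w, frame_h, margin=0.05):
    """Return True when the wound bounding box lies entirely inside the frame,
    keeping a relative margin from each edge (margin and box format are
    assumptions for this sketch)."""
    x1, y1, x2, y2 = bounding_box
    mx, my = margin * frame_w, margin * frame_h
    return x1 >= mx and y1 >= my and x2 <= frame_w - mx and y2 <= frame_h - my

# Example: a 1440x1920 preview frame with one detected wound box.
print(is_wound_aligned((420, 600, 980, 1300), frame_w=1440, frame_h=1920))  # True
```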

[0076] At step 10d, wound images may be captured for a duration of time. For example, a single wound image may be captured and processed. In some embodiments, a plurality of wound images may be continuously captured and processed over the duration of time, for example, over an eight second interval. For each wound image, the associated wound mask, depth mask, camera specific data, and other wound predictions may be sent to the calculation engine 516. For each wound image, the calculation engine 516 may calculate measurement values, such as a mathematical length, mathematical width, depth, calculated area, geometric area, calculated volume, geometric volume, two normalized points for a length line, two normalized points for a width line, two normalized points for a depth line, and normalized points for a wound outline. In some examples, normalized points may be two-dimensional points with an x-component normalized to be in a range of 0 to 1, and a y-component normalized to be in a range of 0 to 1. For each wound image, the calculated measurement values may be stored in the memory 304 or on the non-transitory computer readable storage medium 306. After the period of time has elapsed, such as the eight second interval, the measurement phase may be ended at step 10e.
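
The normalized points mentioned above can be produced by dividing pixel coordinates by the frame dimensions; a small sketch (point and frame values are hypothetical):

```python
def normalize_points(points_px, frame_w, frame_h):
    """Map pixel coordinates to the 0-1 range used for the length, width, and
    depth lines and the wound outline."""
    return [(x / frame_w, y / frame_h) for x, y in points_px]

# Two endpoints of a hypothetical length line in a 1440x1920 frame.
print(normalize_points([(360, 480), (1080, 1440)], 1440, 1920))  # [(0.25, 0.25), (0.75, 0.75)]
```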

[0077] After the measurement phase is ended at step 10e, the best calculated measurement values may be selected at step 10f. If a single set of measurement values was captured for a single wound image at step 10d, the single set of measurement values may be selected by the calculation engine 516. If a plurality of measurement values is captured for a plurality of wound images, then the most accurate set of measurement values may be selected by the calculation engine 516. For example, median values and a standard deviation may be calculated for the entire set of the plurality of measurement values, such as length, width, and depth. For each measurement value of the set of the plurality of measurement values, an absolute difference may be calculated between that measurement value and the calculated median values. The measurement values with the lowest sum of absolute differences may be selected from the set of the plurality of measurement values, and the remaining measurement values may be discarded.
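
The selection of the most representative measurement set can be sketched as follows. This is an illustrative implementation of the median and absolute-difference rule described above, assuming each capture is summarized by length, width, and depth values; the field names are hypothetical.

```python
import numpy as np

def select_best_measurements(measurement_sets):
    """Return the measurement set closest to the per-metric medians,
    i.e., the one with the lowest sum of absolute differences."""
    keys = ("length", "width", "depth")
    values = np.array([[m[k] for k in keys] for m in measurement_sets], dtype=float)
    medians = np.median(values, axis=0)
    scores = np.abs(values - medians).sum(axis=1)
    return measurement_sets[int(np.argmin(scores))]

captures = [
    {"length": 42.0, "width": 20.5, "depth": 6.1},
    {"length": 41.6, "width": 20.9, "depth": 6.0},
    {"length": 48.3, "width": 19.2, "depth": 7.4},  # outlier frame
]
print(select_best_measurements(captures))
```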

[0078] At step 10g, after the user enters a head-to-toe vector, standard wound dimensions may be calculated. For example, the head-to-toe vector may be a vector indicative of a direction from the head of the patient 104 to the toe of the patient 104. The wound standard length may be defined as the length of the wound at the longest point of the wound along the head-to-toe vector. The wound standard width may be defined as the widest point of the wound along a vector perpendicular to the head-to-toe vector. The wound standard depth may be defined as the deepest point of the wound. At step 10h, the wound measurements calculated at steps 10f and 10g may be sent via network 112 to a server.
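
A sketch of deriving the standard length and width from the head-to-toe vector is given below. It assumes the wound outline has already been projected into a planar, metric coordinate system (the patent works from the wound map): the extent of the outline along the head-to-toe direction is taken as the standard length, and the extent along the perpendicular direction as the standard width.

```python
import numpy as np

def standard_length_width(outline_xy, head_to_toe_xy):
    """Return (standard_length, standard_width) for a wound outline, measured
    along and perpendicular to the head-to-toe vector."""
    outline = np.asarray(outline_xy, dtype=float)
    axis = np.asarray(head_to_toe_xy, dtype=float)
    axis = axis / np.linalg.norm(axis)
    perp = np.array([-axis[1], axis[0]])   # perpendicular unit vector
    along = outline @ axis                 # projection onto head-to-toe axis
    across = outline @ perp                # projection onto perpendicular axis
    return along.max() - along.min(), across.max() - across.min()

outline_cm = [(0.0, 0.0), (1.0, 0.2), (1.2, 0.9), (0.3, 1.1)]
print(standard_length_width(outline_cm, head_to_toe_xy=(0.0, 1.0)))  # (~1.1, ~1.2)
```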

[0079] Figure 7 is a flowchart of an exemplary process 20 for processing a captured image utilizing machine learning components at step 10b of Figure 6. For example, after wound images and depth maps are captured at step 10a of process 10, the machine learning processor 514 and/or the machine learning component 520 may obtain a wound image, such as a digital image, from the image capture device 202 at step 20a. At step 20b, the machine learning processor 514 and/or the machine learning component 520 may obtain a depth map associated with the digital image. At step 20c, the machine learning processor 514 and/or the machine learning component 520 may apply a trained machine learning model, such as a wound detection model, in order to find all regions within the digital image where possible wounds are located. For example, the machine learning processor 514 and/or the machine learning component 520 may apply the trained wound detection model to provide a bounding box around each wound present on the digital image. At step 20d, the machine learning processor 514 and/or the machine learning component 520 may apply a trained machine learning model in order to determine a wound mask. For example, the wound mask may be a binary mask of the same size as the digital image. In some examples, if the value of a pixel of the binary mask is 1, the corresponding pixel in the digital image represents a wound. According to some embodiments, if the value of a pixel of the binary mask is 0, the corresponding pixel in the digital image does not represent a wound. At step 20e, after the wound mask has been generated, a wound boundary can be calculated based on the wound mask. For example, the wound boundary may be where the wound mask transitions from regions where the value of the pixels of the binary mask is 1 to regions where the value of the pixels is 0. At step 20f, a wound map may be generated by removing portions of the depth map outside of the wound boundary. In some examples, the wound map may be generated by removing portions of the depth map corresponding to regions of the wound mask where the value of the pixels is 0.
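
The wound boundary and wound map steps can be illustrated with a short NumPy sketch. It treats the wound boundary as mask pixels that touch at least one background pixel (a simple stand-in for the 1-to-0 transition described above) and builds the wound map by keeping depth values only inside the mask; this is not the patent's exact implementation.

```python
import numpy as np

def wound_boundary_and_map(wound_mask, depth_map):
    """Return (boundary, wound_map) from a binary wound mask and a depth map
    of the same size."""
    mask = np.asarray(wound_mask, dtype=bool)
    depth = np.asarray(depth_map, dtype=float)

    # A wound pixel lies on the boundary if any of its 4 neighbors is background.
    padded = np.pad(mask, 1, constant_values=False)
    all_neighbors_wound = (
        padded[:-2, 1:-1] & padded[2:, 1:-1] & padded[1:-1, :-2] & padded[1:-1, 2:]
    )
    boundary = mask & ~all_neighbors_wound

    # Wound map: depth retained inside the mask, NaN outside the wound boundary.
    wound_map = np.where(mask, depth, np.nan)
    return boundary, wound_map
```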

[0080] Figure 8 is a flowchart of an exemplary process 30 for determining a bounding box of a wound by processing a digital image with a trained neural network at step 20c of Figure 7. For example, after the machine learning processor 514 and/or the machine learning component 520 obtains the digital image from the image capture device 202 at step 20a and obtains the depth map associated with the digital image at step 20b, the digital image may be scaled to generate a scaled image at step 30a. For example, the scaled image may be a lower resolution version of the digital image. The scaled image should be of sufficient resolution for the trained machine learning model to distinguish between regions on the scaled image containing a wound and regions on the scaled image not containing a wound. At step 30b, the scaled image may be padded to generate a padded image. For example, if the width of the scaled image is not equal to the height of the scaled image, pixels may be added at the boundaries of the scaled image until the scaled image is a square. For example, after the digital image is scaled at step 30a and padded at step 30b, the size of the padded image may be a square having dimensions of 320 pixels by 320 pixels. In some examples, the width of the scaled image and the height of the scaled image should be in multiples of 32 pixels. For example, the size of the scaled image may be 256 pixels by 256 pixels, 320 pixels by 320 pixels, or 512 pixels by 512 pixels.
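
Steps 30a and 30b can be sketched as follows, assuming a Pillow image as input and the 320-pixel square from the example above; centering the scaled image in the square canvas is one possible way to add the padding pixels.

```python
import numpy as np
from PIL import Image

def scale_and_pad(image, target=320):
    """Scale an image so its longer side equals `target`, then zero-pad the
    shorter side to produce a `target` x `target` square."""
    w, h = image.size
    scale = target / max(w, h)
    resized = image.resize((max(1, round(w * scale)), max(1, round(h * scale))))

    canvas = np.zeros((target, target, 3), dtype=np.uint8)
    rw, rh = resized.size
    x0, y0 = (target - rw) // 2, (target - rh) // 2   # center the scaled image
    canvas[y0:y0 + rh, x0:x0 + rw] = np.asarray(resized.convert("RGB"))
    return Image.fromarray(canvas)
```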

[0081] At step 30c, a trained machine learning model, such as a wound detection model, may be used to predict possible wound regions within the padded image. For example, a deep neural network, such as a convolutional neural network (CNN), may be used to analyze the padded image. In some examples, each possible wound region predicted by the trained machine learning model may be characterized by five values. For example, each possible wound region may include an x1 value, such as a relative X-coordinate of the upper-left corner of the predicted region, a y1 value, such as a relative Y-coordinate of the upper-left corner of the predicted region, an x2 value, such as a relative X-coordinate of the lower-right corner of the predicted region, a y2 value, such as a relative Y-coordinate of the lower-right corner of the predicted region, and a confidence score representing a confidence level of the model that a wound is actually present in the predicted region.
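The five values per predicted region could be carried in a simple structure such as the hypothetical one below; the class name and the conversion helper are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class PredictedRegion:
    x1: float          # relative X-coordinate of the upper-left corner
    y1: float          # relative Y-coordinate of the upper-left corner
    x2: float          # relative X-coordinate of the lower-right corner
    y2: float          # relative Y-coordinate of the lower-right corner
    confidence: float  # confidence that a wound is present in the region

    def to_pixels(self, width: int, height: int) -> tuple:
        """Convert relative coordinates to pixel coordinates."""
        return (self.x1 * width, self.y1 * height,
                self.x2 * width, self.y2 * height)
```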

[0082] At step 30d, the predicted regions generated at step 30c may be filtered. For example, at step 30d, a non-maximum suppression algorithm may be applied to the plurality of possible wound regions to determine a single predicted wound region for each actual wound region. At step 30e, test metrics may be calculated. For example, test metrics may be used for measuring the performance of the trained machine learning model and informing users of the readiness of a machine learning model. In some embodiments, true positives (TP), false positives (FP), and false negatives (FN) can be calculated and summed for every image in a test dataset. For example, if the predicted region within an image in the test dataset has a Jaccard index or Intersection over Union (IoU) with a real region within the image in the test dataset higher than a threshold, then the predicted region and the real region should be paired, and the predicted region should be considered a true positive. This process may be repeated for all of the predicted regions. If the predicted region has an IoU with the real region lower than the threshold, then the predicted region should be considered a false positive. Real regions which are not detected may be considered false negatives. After the true positives, false positives, and false negatives have been calculated for every image, test metrics such as precision, recall, and the F-score (F1) may be calculated according to equations 1-3 below:

Precision = TP / (TP + FP)    (1)

Recall = TP / (TP + FN)    (2)

F1 = 2 × (Precision × Recall) / (Precision + Recall)    (3)

[0083] Generally, the machine learning models with higher precision, recall, and F1 values may be considered better trained and/or more ready.
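A minimal sketch of the test-metric calculation of step 30e and equations (1) through (3) is shown below; the pairing strategy (greedy matching of each prediction to the best unmatched real region) and the default threshold of 0.5 are assumptions not specified in the text.

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def detection_metrics(predicted, real, threshold=0.5):
    """Count TP, FP, and FN for one image, then apply equations (1)-(3)."""
    tp, matched = 0, set()
    for p in predicted:
        # Pair the prediction with the best unmatched real region.
        best = max((i for i in range(len(real)) if i not in matched),
                   key=lambda i: iou(p, real[i]), default=None)
        if best is not None and iou(p, real[best]) > threshold:
            tp += 1
            matched.add(best)
    fp = len(predicted) - tp          # unmatched predictions
    fn = len(real) - tp               # undetected real regions
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```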

[0084] Figure 9 is a flowchart of an exemplary process 40 of a non-maximum suppression algorithm which may be applied at step 30d of Figure 8. For example, after a plurality of possible wound regions have been determined by inputting the padded image into the trained machine learning model at step 30c, a possible wound region A may be selected from the set of possible wound regions B at step 40a. The selected possible wound region A should be the region within the set of possible wound regions B having the highest objectiveness score. For example, the selected possible wound region A may be the region within the set of possible wound regions B having the highest confidence score. At step 40b, the selected possible wound region A may be moved to a set of probable wound regions C. At step 40c, an IoU of the selected possible wound region A with each other possible wound region from the set of possible wound regions B may be calculated. At step 40d, all possible wound regions having an IoU lower than a threshold N may be removed from the set of possible wound regions B. At step 40e, if set B is not empty, then the steps of 40a through 40d may be repeated. If at step 40e, set B is empty, then the members of the set of probable wound regions C may be returned as the bounding box(es) of the wound(s).
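The suppression loop of process 40 could be sketched as follows, reusing the PredictedRegion structure and iou() helper from the earlier sketches; the comparison direction at steps 40c and 40d follows the text as written, and the default threshold N of 0.5 is an assumption.

```python
def non_maximum_suppression(possible_regions, threshold_n=0.5):
    """Process 40 (sketch): move the highest-confidence region from set B
    to set C, then drop regions whose IoU with it falls below threshold N."""
    b = list(possible_regions)   # set B of possible wound regions
    c = []                       # set C of probable wound regions
    while b:                     # step 40e: repeat until set B is empty
        a = max(b, key=lambda r: r.confidence)   # step 40a: highest score
        b.remove(a)
        c.append(a)                              # step 40b: move A into C
        box_a = (a.x1, a.y1, a.x2, a.y2)
        # Steps 40c-40d: keep only regions whose IoU with A meets threshold N.
        b = [r for r in b
             if iou(box_a, (r.x1, r.y1, r.x2, r.y2)) >= threshold_n]
    return c   # bounding box(es) of the wound(s)
```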

[0085] Figure 10 is a flowchart of an exemplary process 50 of determining a wound mask by processing a digital image with a trained neural network at step 20d of Figure 7. According to illustrative embodiments, after a bounding box has been determined at step 20c of Figure 7, the digital image obtained at step 20a of Figure 7 may be scaled to generate a scaled image at step 50a. For example, the digital image may be scaled down from its original size into a lower resolution. In some embodiments, the scaled image may have a size of 256 pixels by 256 pixels. In some examples, the width and the height of the scaled image should be in multiples of 32. For example, the scaled image may have a size of 320 pixels by 320 pixels, or 512 pixels by 512 pixels. At step 50b, the pixel values of the scaled image may be normalized to generate a normalized image. For example, the value of each pixel on the normalized image may be in a range of 0 to 1. In some examples, the value of each pixel on the normalized image may represent an intensity of the corresponding pixel on the scaled image. In some examples, the normalized image may represent a grayscale image of an RGB scaled image. At step 50c, the normalized image may be input into a trained machine learning model, such as a wound segmentation model, to determine a raw mask. For example, a CNN may be applied to analyze the normalized image. The trained machine learning model will generate a raw mask of the same size as the preprocessed image. In some examples, the value of each pixel of the raw mask may be a floating-point number in a range of 0 to 1. In some examples, the higher the value of a pixel of the raw mask is, the more likely it may be that the pixel is a part of a wound.
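Steps 50a through 50c could be sketched as follows; the segmentation model is represented by a placeholder callable, and the grayscale conversion and the division by 255 are illustrative assumptions.

```python
import numpy as np
from PIL import Image

def prepare_and_segment(img: Image.Image, segmentation_model, size: int = 256):
    """Steps 50a-50c (sketch): scale, normalize to [0, 1], and run a
    placeholder wound segmentation model to obtain a raw mask."""
    scaled = img.convert("L").resize((size, size))             # step 50a
    normalized = np.asarray(scaled, dtype=np.float32) / 255.0  # step 50b
    raw_mask = segmentation_model(normalized)                  # step 50c
    return raw_mask   # floating-point values in [0, 1], same size as input
```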

[0086] At step 50d, the raw mask may be transformed into a wound mask. For example, the wound mask may be a binary mask of the same size as the digital image obtained at step 20a of Figure 7. If the value of a pixel of the binary mask is 1, the corresponding pixel of the digital image is likely a part of a wound. If the value of a pixel of the binary mask is 0, the corresponding pixel of the digital image is not likely to be part of a wound. After step 50d, a wound boundary may be calculated, for example, as previously described in step 20e of Figure 7. After wound boundaries are calculated, test metrics may be calculated at step 50e. At step 50e, true positives, false positives, and false negatives may be calculated for each wound mask. For example, if a calculated wound boundary has an IoU with a real wound boundary higher than a threshold, then the calculated wound boundary and the real wound boundary may be paired, and the calculated wound boundary may be considered to be a true positive. If the calculated wound boundary has an IoU with a real wound boundary lower than the threshold, then the calculated wound boundary may be considered a false positive. If a real wound boundary does not intersect with a calculated wound boundary, then a false negative should be recorded. Precision, recall, and F1 may be calculated according to previously described equations (1) through (3). As previously discussed, the machine learning models with higher precision, recall, and F1 values may be considered better trained and/or more ready.

[0087] Figure 11 is a flowchart of an exemplary process 60 of transforming a raw mask into a wound mask at step 50d of Figure 10. For example, after a raw mask has been determined by inputting the normalized image into the machine learning model at step 50c of Figure 10, a pixel value T may be selected at step 60a. According to illustrative embodiments, a pixel value T of 0.5 may be selected. At step 60b, if the value of the pixel of the raw mask is greater than T, then the pixel value may be set to 1 at step 60c, and the mask value may be set to 255 at step 60d. At step 60b, if the value of the pixel of the raw mask is not greater than T, then the pixel value may be set to 0 at step 60e, and the mask value may be set to 0 at step 60f.
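Process 60 amounts to a threshold operation; a sketch with the illustrative value T = 0.5 follows, producing both the binary wound mask (0/1) and the 0/255 display mask described above.

```python
import numpy as np

def raw_mask_to_wound_mask(raw_mask: np.ndarray, t: float = 0.5):
    """Process 60 (sketch): threshold the raw mask at T."""
    binary = (raw_mask > t).astype(np.uint8)   # steps 60b, 60c, 60e
    display = binary * 255                     # steps 60d, 60f: mask values
    return binary, display
```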

[0088] Figure 12 is a flowchart of an exemplary process 70 of calculating measurement values at step 10d of Figure 6. For example, at step 70a, a wound surface map may be generated by interpolating depth information of portions of the depth map outside of the wound boundary. The boundary of the wound surface map may be the wound boundary calculated from the wound mask at step 20e of Figure 7. The depth of the wound surface map contained within the boundary of the wound surface map may be interpolated from the depth information of the depth map outside of the wound boundary. At step 70b, wound depth data may be generated by calculating a depth difference between the wound surface map and the wound map. At step 70c, the mathematical length Lm and mathematical width Wm of the wound may be calculated using the wound map. At step 70d, a wound mathematical volume may be calculated using the mathematical length Lm of the wound, the mathematical width Wm of the wound, and the wound depth data.
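A simplified sketch of process 70 is shown below. The application does not specify the interpolation method, so the sketch fits a plane to depth samples outside the wound as a stand-in for the wound surface map, and it assumes a constant physical pixel size; both are simplifying assumptions.

```python
import numpy as np

def measure_wound(depth_map: np.ndarray, wound_mask: np.ndarray,
                  pixel_size_cm: float):
    """Process 70 (sketch): mathematical length, width, and volume."""
    # Step 70a: approximate the wound surface map by fitting a plane
    # z = a*x + b*y + c to depth values outside the wound boundary.
    oy, ox = np.nonzero(wound_mask == 0)
    design = np.column_stack([ox, oy, np.ones_like(ox)]).astype(float)
    coeffs, *_ = np.linalg.lstsq(design, depth_map[oy, ox], rcond=None)
    wy, wx = np.nonzero(wound_mask == 1)
    surface = coeffs[0] * wx + coeffs[1] * wy + coeffs[2]
    # Step 70b: wound depth data = difference between wound map and surface
    # (assumes larger depth values are farther from the camera).
    wound_depth = depth_map[wy, wx] - surface
    # Step 70c: mathematical length Lm and width Wm from the wound-map extent.
    length_cm = (wy.max() - wy.min() + 1) * pixel_size_cm
    width_cm = (wx.max() - wx.min() + 1) * pixel_size_cm
    # Step 70d: mathematical volume as depth integrated over the wound area.
    volume_cm3 = float(np.clip(wound_depth, 0, None).sum() * pixel_size_cm ** 2)
    return length_cm, width_cm, volume_cm3
```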

[0089] Figure 13 is a flowchart of an exemplary process 80 of calculating standard values at step 10g of Figure 6. For example, at step 80a, an orientation axis may be assigned to the digital image, wound mask, and/or wound map. The orientation axis may be along the head-to-toe vector entered by the user. At step 80b, a standard length Ls and standard width Ws may be calculated for the wound. As previously described, the wound standard length may be defined as the length of the wound at the longest point of the wound along the orientation axis, and the wound standard width may be defined as the widest point of the wound along a vector perpendicular to the orientation axis. At step 80c, the wound standard volume may be calculated using the wound standard depth, which may be defined as the deepest point of the wound.
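Process 80 could be sketched by projecting wound pixels onto the head-to-toe axis and its perpendicular, as below; the angle convention, the constant pixel size, and the length × width × deepest-point volume formula are assumptions, since the text does not spell them out.

```python
import numpy as np

def standard_measurements(wound_mask: np.ndarray, head_to_toe_deg: float,
                          pixel_size_cm: float, deepest_point_cm: float):
    """Process 80 (sketch): standard length, width, and volume."""
    ys, xs = np.nonzero(wound_mask == 1)
    theta = np.deg2rad(head_to_toe_deg)               # step 80a: orientation axis
    axis = np.array([np.sin(theta), np.cos(theta)])   # head-to-toe direction
    perp = np.array([axis[1], -axis[0]])              # perpendicular vector
    pts = np.column_stack([xs, ys]).astype(float)
    along, across = pts @ axis, pts @ perp
    length_cm = (along.max() - along.min()) * pixel_size_cm   # step 80b
    width_cm = (across.max() - across.min()) * pixel_size_cm  # step 80b
    volume_cm3 = length_cm * width_cm * max(deepest_point_cm, 0.0)  # step 80c
    return length_cm, width_cm, volume_cm3
```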

[0090] Figure 14 is a flowchart of an exemplary process 90 of notifying the user of correct image capture device 202 alignment which may be performed at step 10c of Figure 6. In some examples, the image capture device 202 may be oriented in the same direction as the touchscreen 308. For example, the image capture device 202 may be the front-facing camera assembly of a smartphone. In use, the image capture device 202 may be pointed in the same direction as the touchscreen 308, for example, towards the wound. Process 90 provides a method for the user to align the image capture device 202 correctly with the wound without relying on a preview on the touchscreen 308. For example, at step 90a, the wound boundary may be determined. In some examples, the wound boundary calculated at step 20e of Figure 7 may be used as the wound boundary at step 90a. At step 90b, the processor 302 determines whether the wound boundary is aligned with the camera frame of the image capture device 202. If at step 90b the processor 302 determines that the wound boundary is not aligned within the camera frame, the mobile device 110 may notify the user at step 90c to reposition the image capture device 202. For example, the processor 302 may send instructions to the speaker assembly 322 to audibly guide the user to reposition the mobile device 110, and/or the processor 302 may send instructions to the vibration motor 324 to vibrate when the wound is correctly aligned or when the wound is not correctly aligned. After step 90c, steps 90a and 90b may be repeated until the processor 302 determines that the wound boundary is aligned with the camera frame at step 90b, upon which the mobile device 110 may notify the user at step 90d that the image capture device 202 is correctly aligned. For example, the processor 302 may send instructions to the speaker assembly 322 to play an audible sound indicating correct alignment, and/or the processor 302 may send instructions to the vibration motor 324 to notify the user of correct alignment through a series of vibrations.
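The alignment test at step 90b is not detailed in the text; one simple, hypothetical criterion is that every point of the wound boundary lies inside the camera frame with a small margin, as sketched below.

```python
def boundary_aligned(boundary_points, frame_width, frame_height, margin=0.05):
    """Step 90b (sketch): the wound boundary is considered aligned when all
    boundary points fall inside the camera frame with a relative margin."""
    mx, my = frame_width * margin, frame_height * margin
    return all(mx <= x <= frame_width - mx and my <= y <= frame_height - my
               for x, y in boundary_points)
```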

[0091] Referring collectively to Figure 15, example user interface adaptations displayed on examples of the touchscreen 308 of the mobile device 110 of Figures 1-4 are shown. Figure 15A illustrates examples of notifying the user of correct alignment at step 90d of Figure 14 on the touchscreen 308. As shown in the upper right corner of the camera viewfinder frame 1502, a check mark 1504 may be shown when the image capture device 202 is correctly aligned with the wound, such as tissue site 106, at step 90d. Figure 15B shows an example of a message 1506 to the user to hold the image capture device 202 still for the image capture interval. For example, if the measurement phase at step 10d of Figure 6 includes continuously capturing wound images for an eight second interval, the user interface may display a message 1506 to “Hold still for 8 seconds” on the touchscreen 308, as shown in Figure 15B. Figure 15C shows an example of how the user interface may display the wound boundary calculated at step 20e of Figure 7 to the user on the touchscreen 308. As shown in Figure 15C, the calculated wound boundary may be displayed as an outline 1508 overlaid on the wound.

[0092] Figure 16 is a flowchart of an exemplary process 92 of the mobile device 110 managing the process of the user inputting the head-to-toe vector and calculating standard values at step 10g of Figure 6 and in Figure 13. Referring collectively to Figure 17, example user interface adaptations of the process 92 of Figure 16 displayed on examples of the touchscreen 308 of the mobile device 110 of Figures 1-4 are shown. For example, after step 10f of Figure 6, an outline 1508 of the wound such as tissue site 106 may be shown on the touchscreen 308, along with a button 1702 for the user to continue. For example, as illustrated in Figure 17A, a “Next” button 1702 may be shown in the upper right corner of the user interface. Upon the user selecting the “Next” button 1702, the processor 302 may determine whether it is a first time use scenario for the user at step 92a. If the processor 302 determines that it is a first time use scenario for the user at step 92a, the processor 302 may cause the touchscreen 308 to display a prompt 1704 for the user to identify the position of the body during the wound image capture. For example, as illustrated in Figure 17B, the touchscreen 308 may display a prompt 1704 asking the user to select whether the user’s body was in a “Laying Down,” “Standing,” “Sitting” or on the “Back” position while the image capture device 202 was capturing the wound image. As shown in Figure 17B, the user interface may include a button 1706 allowing the user to “Continue” after the user identifies the position of the body during the digital image capture at step 92b. After the user selects the “Continue” button 1706 in Figure 17B, the process 92 may proceed to step 92c.

[0093] If at step 92a the processor 302 determines that it is not a first time use scenario for the user, then the process 92 may proceed to step 92c. At step 92c, the processor 302 may cause the touchscreen 308 to display an alignment overlay. As shown in Figure 17C, the user interface may display a screen explaining the “Clock Method” to the user. For example, as shown in Figure 17C, the user interface may display text and graphics 1708 to explain to the user that according to the “Clock Method,” the patient’s head is located at the 12 o’clock position while the patient’s feet are located at the 6 o’clock position. After the user selects the “Continue” button 1710 shown at the bottom of Figure 17C, the processor 302 may display additional instructions 1712 on the touchscreen 308. For example, as shown in Figure 17D, the processor 302 may cause the touchscreen 308 to display additional instructions prompting the user to “Rotate the hands of the clock to the position of your head ‘12 o’clock’ and feet ‘6 o’clock.’” Figure 17E shows an example of the alignment overlay 1714 displayed on the digital wound image received from the image capture device 202. At step 92d of the process 92, the user may rotate the alignment overlay 1714 shown in Figure 17E until the correct orientation is achieved. The head-to-toe vector used by the process 10 at step 10g may be indicated by the line 1716 pointing to the 12 and 6 positions on the alignment overlay shown in Figure 17E. At step 92e, the wound standard length may be determined as previously described with respect to step 10g of Figure 6 and step 80b of Figure 13. At step 92f, the wound standard width may be determined as previously described with respect to step 10g of Figure 6 and step 80b of Figure 13. At step 92g, the wound standard volume may be determined as previously described with respect to step 10g of Figure 6 and step 80c of Figure 13.

[0094] Figure 17F shows an example user interface adaptation displaying the best calculated measurement values selected at step 10f of Figure 6, such as the mathematical length, width, and volume calculated at steps 70c and 70d of Figure 12. Figure 17F also shows the calculated standard values of step 10g of Figure 6, such as the wound standard length, width, and volume calculated at steps 80b and 80c of Figure 13 and steps 92e, 92f, and 92g of Figure 16. As illustrated in Figure 17F, the wound mathematical length may be overlaid on the wound image, for example, by the line 1718 labeled “4.2cm.” The wound mathematical width may be overlaid on the wound image, for example, by the line 1720 labeled “2.4cm.” The wound standard length may be overlaid on the wound image, for example, indicated by the line 1722 labeled “1.3cm.” In some examples, the wound mathematical volume may be shown, for example, in an expandable box 1724 at the bottom of the user interface shown in Figure 17F. For example, the wound mathematical volume may be indicated below the text “Wound Volume” by “12.10cm³” along with the date the measurement was taken, indicated by “Thursday, September 24, 2020.” In some examples, the expandable box 1724 at the bottom of the user interface in Figure 17F may be expanded to display more information. For example, as shown in Figure 17G, the user may use the touchscreen 308 in order to drag or swipe the expandable box 1724 up to additionally display “Measurements,” which may include the mathematical length “Length 4.2cm,” mathematical width “Width 2.4cm,” and depth “Depth 1.2cm.” Additionally, a “Wound Location,” such as the “Right Upper Leg” may be displayed, as well as the “Position during photo capture,” such as “Laying Down, Side, Right.” If the user selects the stylus 1726 in the “Measurements” field, the wound measurements may be manually edited. If the user selects the stylus 1728 in the “Wound Location” field, the user may confirm or edit the location of the wound, as shown in Figure 17H. If the user selects the stylus 1730 in the “Position during photo capture” field, the user may confirm or edit the position the patient’s body was in while the wound image was captured, as shown in Figure 17I. After the user has confirmed or edited the fields shown in Figure 17G, the user may select the “Save” button 1732 to save the measurements and data. In some examples, a miniaturized indicator 1734 may show the head-to-toe vector at any screen of the user interface.

[0095] Figure 18 is a flowchart of an exemplary process 94 for training a machine learning model for use in the processes of Figures 6, 7, 8, and 10. For example, process 94 may be used to train the wound detection model described with respect to Figures 6, 7, and 8, and/or the wound segmentation model described with respect to Figures 6, 7, and 10. At step 94a, the data, such as wound images, may be prepared for training. In some examples, the data may be prepared for training at a server, such as the server at the support center 114. The wound images should be annotated. For example, the wound images may be annotated with fields such as a presence of wounds and a number of wounds present. At step 94b, the machine learning model may be trained. For example, the machine learning model may be trained at the server. The annotated wound images may be loaded and used for training. After each training interval, the intermediate results of the training, such as values of metrics and loss functions, may be saved to a database. The learning curves of models during training may be used to diagnose problems with learning, such as an underfit model or an overfit model, as well as whether the training and validation datasets are suitably representative. In some examples, users may manually review images containing wound masks for training. According to illustrative embodiments, images with low confidence scores may be flagged for manual review. In some examples, wound masks may be manually adjusted during step 94b. Wound masks that have been manually adjusted may be sent back to mobile devices 110, and users may be notified that the wound mask has been manually adjusted. At step 94c, performance metrics can be calculated. For example, performance metrics may be calculated at the server. When the machine learning model has been trained, metrics may be calculated to compare the performance of the trained model against existing models. For example, metrics similar to the test metrics described with respect to step 30e of Figure 8 and/or step 50e of Figure 10 may be used. At step 94d, the trained machine learning model may be saved onto the server at the support center 114 for retrieval by the mobile device 110.
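A PyTorch-style sketch of the training interval of step 94b follows; the model, datasets, loss function, batch size, learning rate, and metric-logging callback are placeholders, and saving to a local file stands in for the database and server storage described in steps 94b and 94d.

```python
import torch
from torch.utils.data import DataLoader

def train_model(model, train_dataset, val_dataset, loss_fn,
                epochs=50, log_metrics=print):
    """Step 94b (sketch): train on annotated wound images and record
    intermediate metrics after each epoch for learning-curve diagnosis."""
    train_loader = DataLoader(train_dataset, batch_size=8, shuffle=True)
    val_loader = DataLoader(val_dataset, batch_size=8)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for epoch in range(epochs):
        model.train()
        train_loss = 0.0
        for images, targets in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), targets)
            loss.backward()
            optimizer.step()
            train_loss += loss.item()
        model.eval()
        val_loss = 0.0
        with torch.no_grad():
            for images, targets in val_loader:
                val_loss += loss_fn(model(images), targets).item()
        # Intermediate results saved per epoch support diagnosing underfit
        # or overfit models from the learning curves, as described above.
        log_metrics({"epoch": epoch,
                     "train_loss": train_loss / max(len(train_loader), 1),
                     "val_loss": val_loss / max(len(val_loader), 1)})
    torch.save(model.state_dict(), "wound_model.pt")  # placeholder for step 94d
    return model
```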

[0096] The systems, apparatuses, and methods described herein may provide significant advantages. For example, conventional methods of wound measurement, such as using rulers and cotton swabs, may be inaccurate. Similarly, existing methods of wound measurement using conventional two-dimensional cameras may require positioning an object of known dimensions within the frame for calibration. By utilizing an image capture device 202 capable of generating a depth map in addition to a digital image, the mobile device 110 may eliminate or substantially reduce human error associated with using rulers and cotton swabs or conventional two-dimensional cameras. Additionally, conventional two-dimensional cameras do not have the ability to determine depth. Thus, additional depth measurements or estimations may be required to estimate wound volume given a two-dimensional image. Furthermore, by utilizing a trained machine learning model, the process of imaging and measuring wounds with the mobile device 110 may be highly automated, reducing workload for clinicians and patients and increasing precision and accuracy. Additionally, in examples where the image capture device 202 includes specialized camera attachments, oxygenation of the wound may be detected. By using the image capture device 202 in conjunction with colorimetric, infrared, and/or fluorescent imaging, the presence of specific gram-positive or gram-negative bacteria may be detected in the wound site. Furthermore, the mobile device 110 may provide the user with a report of wound healing progress, and offer wound dressing recommendations to the user based on collected data. In some examples, the machine learning models may also be trained to classify the recognized wounds into the Red-Yellow-Black classification system based on appearance of the wound bed. Furthermore, the mobile device 110 may automatically provide wound volume data to a fluid instillation system in order to automatically calculate instillation fluid volume.

[0097] While shown in a few illustrative embodiments, a person having ordinary skill in the art will recognize that the systems, apparatuses, and methods described herein are susceptible to various changes and modifications that fall within the scope of the appended claims. Moreover, descriptions of various alternatives using terms such as “or” do not require mutual exclusivity unless clearly required by the context, and the indefinite articles “a” or “an” do not limit the subject to a single instance unless clearly required by the context. Components may also be combined or eliminated in various configurations for purposes of sale, manufacture, assembly, or use.

[0098] The appended claims set forth novel and inventive aspects of the subject matter described above, but the claims may also encompass additional subject matter not specifically recited in detail. For example, certain features, elements, or aspects may be omitted from the claims if not necessary to distinguish the novel and inventive features from what is already known to a person having ordinary skill in the art. Features, elements, and aspects described in the context of some embodiments may also be omitted, combined, or replaced by alternative features serving the same, equivalent, or similar purpose without departing from the scope of the invention defined by the appended claims.