Title:
SYSTEMS, METHODS, AND DEVICES FOR DIAGNOSTIC TESTING AND SECURE AND VERIFIED EXCHANGE OF ELECTRONIC MEDICAL RECORDS
Document Type and Number:
WIPO Patent Application WO/2024/073764
Kind Code:
A2
Abstract:
A method can include receiving, from a user device of a user, a request for a remote diagnostic test session. A method can include initiating the remote diagnostic test session, the remote diagnostic test session comprising a video conference session. A method can include receiving monitored video frame data from the user device, the monitored video frame data comprising video of the user performing a step of a remote diagnostic test procedure. A method can include detecting in the monitored video frame data a diagnostic test tool. A method can include determining a bounding box including at least a portion of the diagnostic test tool. A method can include tracking a movement of the diagnostic test tool. A method can include determining an insertion depth of the diagnostic test tool. A method can include determining completion of a test action by the user.

Inventors:
FERRO JR MICHAEL W (US)
HEISING JAMES THOMAS (US)
KRAMER NICHOLAS ATKINSON (US)
NIENSTEDT ZACHARY CARL (US)
Application Number:
PCT/US2023/075678
Publication Date:
April 04, 2024
Filing Date:
October 02, 2023
Assignee:
EMED LABS LLC (US)
FERRO JR MICHAEL W (US)
International Classes:
H04L51/00; G16H50/20
Attorney, Agent or Firm:
JASON FRANCIS (US)
Claims:
CLAIMS

WHAT IS CLAIMED IS:

1. A computing system for performing a remote diagnostic test comprising: a processor; and a non-volatile memory having instructions embodied therein that, when executed by the processor, cause the computing system to execute a method comprising: receiving, from a user device of a user, a request for a remote diagnostic test session; initiating the remote diagnostic test session, the remote diagnostic test session comprising a video conference session; receiving monitored video frame data from the user device, the monitored video frame data comprising video of the user performing a step of a remote diagnostic test procedure; detecting in the monitored video frame data a diagnostic test tool; determining a bounding box including at least a portion of the diagnostic test tool; tracking a movement of the diagnostic test tool; determining an insertion depth of the diagnostic test tool; and determining completion of a test action by the user.

2. The computing system of Claim 1, wherein the method executed by the computing system further comprises determining a confidence score, the confidence score indicating a confidence level that the user completed the test action successfully.

3. The computing system of Claim 1, wherein detecting the diagnostic test tool comprises detecting a grip location of a finger of the user on the diagnostic test tool.

4. The computing system of Claim 1, wherein detecting the diagnostic test tool comprises extracting an identifier of the diagnostic test tool from the monitored video frame data.

5. The computing system of Claim 1, wherein tracking the movement of the diagnostic test tool comprises detecting a contour.

6. The computing system of Claim 1, wherein determining the insertion depth of the diagnostic test tool comprises determining a vector between a hand of the user and a cavity of the user.

7. The computing system of Claim 1, wherein determining the insertion depth of the diagnostic test tool comprises detecting a fiducial of the diagnostic test tool.

8. The computing system of Claim 1, wherein the method executed by the computing system further comprises: receiving a request from the user to share a test result with a third party; determining a unique identifier for the user; and transmitting the test result and the unique identifier to the third party, wherein the unique identifier is configured to be compared by the third party to a second unique identifier generated by the third party.

9. The computing system of Claim 8, wherein the unique identifier is based on an email address of the user.

10. The computing system of Claim 8, wherein the unique identifier comprises a hash of personal information of the user.

11. A method for performing a remote diagnostic test comprising: receiving, from a user device of a user, a request for a remote diagnostic test session; initiating the remote diagnostic test session, the remote diagnostic test session comprising a video conference session; receiving monitored video frame data from the user device, the monitored video frame data comprising video of the user performing a step of a remote diagnostic test procedure; detecting in the monitored video frame data a diagnostic test tool; determining a bounding box including at least a portion of the diagnostic test tool; tracking a movement of the diagnostic test tool; determining an insertion depth of the diagnostic test tool; and determining completion of a test action by the user.

12. The method of Claim 11, further comprising determining a confidence score, the confidence score indicating a confidence level that the user completed the test action successfully.

13. The method of Claim 11, wherein detecting the diagnostic test tool comprises detecting a grip location of a finger of the user on the diagnostic test tool.

14. The method of Claim 11, wherein detecting the diagnostic test tool comprises extracting an identifier of the diagnostic test tool from the monitored video frame data.

15. The method of Claim 11, wherein tracking the movement of the diagnostic test tool comprises detecting a contour.

16. The method of Claim 11, wherein determining the insertion depth of the diagnostic test tool comprises determining a vector between a hand of the user and a cavity of the user.

17. The method of Claim 11, wherein determining the insertion depth of the diagnostic test tool comprises detecting a fiducial of the diagnostic test tool.

18. The method of Claim 11, further comprising: receiving a request from the user to share a test result with a third party; determining a unique identifier for the user; and transmitting the test result and the unique identifier to the third party, wherein the unique identifier is configured to be compared by the third party to a second unique identifier generated by the third party.

19. The method of Claim 18, wherein the unique identifier is based on an email address of the user.

20. The method of Claim 18, wherein the unique identifier comprises a hash of personal information of the user.

Description:
SYSTEMS, METHODS, AND DEVICES FOR DIAGNOSTIC TESTING AND SECURE AND VERIFIED EXCHANGE OF ELECTRONIC MEDICAL RECORDS

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. Provisional Application No. 63/377,869, filed September 30, 2022, titled “DETERMINISTIC SYSTEMS METHODS AND DEVICES FOR SECURE AND VERIFIED EXCHANGE OF ELECTRONIC MEDICAL RECORDS,” U.S. Provisional Application No. 63/381,440, filed October 28, 2022, titled “SYSTEMS METHODS AND DEVICES FOR MULTI-TEST KITS,” U.S. Provisional Application No. 63/380,009, filed October 18, 2022, titled “SYSTEMS METHODS AND DEVICES FOR IMPROVED TELEHEALTH UTI TESTING,” U.S. Provisional Application No. 63/380,013, filed October 18, 2022, titled “SYSTEMS, METHODS, AND DEVICES FOR IMPROVED TELEHEALTH STREPTOCOCCAL PHARYNGITIS TESTING,” U.S. Provisional Application No. 63/380,020, filed October 18, 2022, titled “SYSTEMS, METHODS, AND DEVICES FOR IMPROVED TELEHEALTH INFLUENZA TESTING,” and U.S. Provisional Application No. 63/378,913, filed October 10, 2022, titled “SYSTEMS, METHODS, AND DEVICES FOR AI-ENABLED DIAGNOSTIC TEST TRACKING AND DETECTION,” the entire contents of each of which are hereby incorporated by reference for all purposes and as if set forth fully herein.

TECHNICAL FIELD

[0002] This application relates to remote medical testing and exchange of medical information.

BACKGROUND

[0003] The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Thus, unless otherwise indicated, it should not be assumed that any of the material described in this section qualifies as prior art merely by virtue of its inclusion in this section.

[0004] Individuals often utilize various medical services such as telehealth, doctor’s offices, laboratories, hospitals, imaging centers, and so forth. When utilizing telehealth services, individuals can encounter difficulties performing certain tasks, leading to errors, wasted time, and/or wasted test materials. In some cases, an individual may attempt to obtain a false test result. Accordingly, there is a need for improved approaches to providing telehealth services.

[0005] It can be important to exchange information between medical services or between a medical service and another entity such as an employer, government agency, regulatory body, online testing or telehealth platform, and so forth.

[0006] There can be significant challenges with existing approaches to sharing data. Accordingly, there is a need for improved systems and methods for the secure and verified exchange of electronic medical records.

SUMMARY

[0007] For purposes of this summary, certain aspects, advantages, and novel features are described herein. It is to be understood that not necessarily all such advantages may be achieved in accordance with any particular embodiment. Thus, for example, those skilled in the art will recognize the disclosures herein may be embodied or carried out in a manner that achieves one or more advantages taught herein without necessarily achieving other advantages as may be taught or suggested herein.

[0008] In some aspects, the techniques described herein relate to a computing system for performing a remote diagnostic test including: a processor; and a non-volatile memory having instructions embodied therein that, when executed by the processor, cause the computing system to execute a method including: receiving, from a user device of a user, a request for a remote diagnostic test session; initiating the remote diagnostic test session, the remote diagnostic test session including a video conference session; receiving monitored video frame data from the user device, the monitored video frame data including video of the user performing a step of a remote diagnostic test procedure; detecting in the monitored video frame data a diagnostic test tool; determining a bounding box including at least a portion of the diagnostic test tool; tracking a movement of the diagnostic test tool; determining an insertion depth of the diagnostic test tool; and determining completion of a test action by the user.
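By way of illustration only, the detection and bounding-box steps recited above could be realized with a conventional contour-based detector, which also matches the contour-based tracking recited in the claims. The following Python sketch uses OpenCV and assumes the diagnostic test tool (e.g., a swab) can be isolated by color thresholding; the HSV range and minimum contour area are illustrative placeholders, not values taken from this application.

import cv2
import numpy as np

# Illustrative HSV range and minimum contour area; real values would be
# calibrated for the specific diagnostic test tool and lighting conditions.
SWAB_HSV_LOW = np.array([0, 0, 200])     # bright, low-saturation (white swab)
SWAB_HSV_HIGH = np.array([180, 40, 255])
MIN_AREA_PX = 500

def detect_tool_bounding_box(frame_bgr):
    """Return an (x, y, w, h) bounding box around the largest swab-like
    contour in a video frame, or None if no candidate is found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SWAB_HSV_LOW, SWAB_HSV_HIGH)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = [c for c in contours if cv2.contourArea(c) >= MIN_AREA_PX]
    if not candidates:
        return None
    largest = max(candidates, key=cv2.contourArea)
    return cv2.boundingRect(largest)  # (x, y, w, h)

Tracking the movement of the tool could then be approximated by running such a detector on each monitored frame and comparing successive box centers; the application does not prescribe a particular detector or tracker.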

[0009] In some aspects, the techniques described herein relate to a computing system, wherein the method executed by the computing system further includes determining a confidence score, the confidence score indicating a confidence level that the user completed the test action successfully.
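The application does not specify how such a confidence score is computed. As one minimal sketch, per-step detection confidences (each in [0, 1]) could be combined into a single score, for example as a weighted average; the weighting scheme here is an assumption for illustration.

def test_action_confidence(step_scores, weights=None):
    """Combine per-step detection confidences into one confidence level
    that the test action was completed successfully. A weighted average
    is one simple choice; the application does not specify the formula."""
    if weights is None:
        weights = [1.0] * len(step_scores)
    total = sum(weights)
    return sum(s * w for s, w in zip(step_scores, weights)) / total

# e.g., test_action_confidence([0.9, 0.8, 0.95]) -> approximately 0.88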

[0010] In some aspects, the techniques described herein relate to a computing system, wherein detecting the diagnostic test tool includes detecting a grip location of a finger of the user on the diagnostic test tool.

[0011] In some aspects, the techniques described herein relate to a computing system, wherein detecting the diagnostic test tool includes extracting an identifier of the diagnostic test tool from the monitored video frame data.

[0012] In some aspects, the techniques described herein relate to a computing system, wherein tracking the movement of the diagnostic test tool includes detecting a contour.

[0013] In some aspects, the techniques described herein relate to a computing system, wherein determining the insertion depth of the diagnostic test tool includes determining a vector between a hand of the user and a cavity of the user.
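As a hedged sketch of this vector-based estimate, assume an upstream landmark detector (not specified in the application) supplies 2D pixel coordinates for the gripping hand and the target cavity (e.g., a nostril), and that the tool's visible length can be measured from its bounding box:

import numpy as np

def estimate_insertion_depth(hand_xy, cavity_xy, tool_visible_px, tool_length_px):
    """Illustrative insertion-depth estimate from 2D video landmarks.
    hand_xy / cavity_xy: assumed pixel coordinates from an upstream
    landmark detector. The hand-to-cavity vector gives the approach
    direction and distance; the shrinking visible length of the tool
    gives a rough measure of how far it has been inserted."""
    vec = np.asarray(cavity_xy, dtype=float) - np.asarray(hand_xy, dtype=float)
    hand_to_cavity_px = float(np.linalg.norm(vec))
    inserted_px = max(tool_length_px - tool_visible_px, 0.0)
    return hand_to_cavity_px, inserted_px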

[0014] In some aspects, the techniques described herein relate to a computing system, wherein determining the insertion depth of the diagnostic test tool includes detecting a fiducial of the diagnostic test tool.

[0015] In some aspects, the techniques described herein relate to a computing system, wherein the method executed by the computing system further includes: receiving a request from the user to share a test result with a third party; determining a unique identifier for the user; and transmitting the test result and the unique identifier to the third party, wherein the unique identifier is configured to be compared by the third party to a second unique identifier generated by the third party.

[0016] In some aspects, the techniques described herein relate to a computing system, wherein the unique identifier is based on an email address of the user.

[0017] In some aspects, the techniques described herein relate to a computing system, wherein the unique identifier includes a hash of personal information of the user.
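For illustration, a hashed identifier of the kind described in paragraphs [0015]-[0017] could be derived and compared as follows. The choice of SHA-256 and of the email address as the hashed personal information are assumptions; a production system would likely use a salted or keyed hash.

import hashlib

def unique_identifier(email: str) -> str:
    """Derive a user identifier as a hash of personal information.
    Normalizing (trimmed, lowercase) before hashing lets two parties who
    hold the same email address derive the same identifier independently."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def identifiers_match(received_id: str, third_party_email: str) -> bool:
    """The third party recomputes the hash from its own copy of the
    personal information and compares it to the received identifier."""
    return received_id == unique_identifier(third_party_email)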

[0018] In some aspects, the techniques described herein relate to a method for performing a remote diagnostic test including: receiving, from a user device of a user, a request for a remote diagnostic test session; initiating the remote diagnostic test session, the remote diagnostic test session including a video conference session; receiving monitored video frame data from the user device, the monitored video frame data including video of the user performing a step of a remote diagnostic test procedure; detecting in the monitored video frame data a diagnostic test tool; determining a bounding box including at least a portion of the diagnostic test tool; tracking a movement of the diagnostic test tool; determining an insertion depth of the diagnostic test tool; and determining completion of a test action by the user.

[0019] In some aspects, the techniques described herein relate to a method, further including determining a confidence score, the confidence score indicating a confidence level that the user completed the test action successfully.

[0020] In some aspects, the techniques described herein relate to a method, wherein detecting the diagnostic test tool includes detecting a grip location of a finger of the user on the diagnostic test tool.

[0021] In some aspects, the techniques described herein relate to a method, wherein detecting the diagnostic test tool includes extracting an identifier of the diagnostic test tool from the monitored video frame data.

[0022] In some aspects, the techniques described herein relate to a method, wherein tracking the movement of the diagnostic test tool includes detecting a contour.

[0023] In some aspects, the techniques described herein relate to a method, wherein determining the insertion depth of the diagnostic test tool includes determining a vector between a hand of the user and a cavity of the user.

[0024] In some aspects, the techniques described herein relate to a method, wherein determining the insertion depth of the diagnostic test tool includes detecting a fiducial of the diagnostic test tool.

[0025] In some aspects, the techniques described herein relate to a method, further including: receiving a request from the user to share a test result with a third party; determining a unique identifier for the user; and transmitting the test result and the unique identifier to the third party, wherein the unique identifier is configured to be compared by the third party to a second unique identifier generated by the third party.

[0026] In some aspects, the techniques described herein relate to a method, wherein the unique identifier is based on an email address of the user.

[0027] In some aspects, the techniques described herein relate to a method, wherein the unique identifier includes a hash of personal information of the user.

[0028] All of these embodiments are intended to be within the scope of the invention herein disclosed. These and other embodiments will become readily apparent to those skilled in the art from the following detailed description, having reference to the attached figures, the invention not being limited to any particular disclosed embodiment(s).

BRIEF DESCRIPTION OF THE DRAWINGS

[0029] These and other features, aspects, and advantages of the present disclosure are described with reference to drawings of certain embodiments, which are intended to illustrate, but not to limit, the present disclosure. It is to be understood that the attached drawings are for the purpose of illustrating concepts disclosed in the present disclosure and may not be to scale.

[0030] Figures 1-2B are schematic illustrations of test kits and diagnostic test kit containers according to some embodiments.

[0031] Figures 3 and 4 illustrate examples of diagnostic tests in which a test result can be obtained.

[0032] Figures 5 and 6 illustrate examples of diagnostic tests or screenings in which no test result may be obtained.

[0033] Figure 7 shows an overall testing and treatment process according to some embodiments.

[0034] Figure 8 is a schematic illustration of a testing process according to some embodiments.

[0035] Figure 9 is a schematic illustration of an example process for telehealth urinary tract infection testing according to some embodiments.

[0036] Figure 10 is a schematic illustration of an example process for telehealth streptococcal pharyngitis testing according to some embodiments.

[0037] Figure 11 is a schematic illustration of a process for telehealth influenza testing according to some embodiments.

[0038] Figures 12A and 12B illustrate examples of tracking a test swab according to some embodiments.

[0039] Figure 13 illustrates an example data sharing process according to some embodiments.

[0040] Figure 14 illustrates an example data sharing process according to some embodiments.

[0041] Figure 15 is a flowchart that illustrates an example identity verification process according to some embodiments.

[0042] FIG. 16 is a block diagram depicting an embodiment of a computer hardware system configured to run software for implementing one or more embodiments disclosed herein.

DETAILED DESCRIPTION

[0043] Although several embodiments, examples, and illustrations are disclosed below, it will be understood by those of ordinary skill in the art that the inventions described herein extend beyond the specifically disclosed embodiments, examples, and illustrations and include other uses of the inventions and obvious modifications and equivalents thereof. Embodiments of the inventions are described with reference to the accompanying figures, wherein like numerals refer to like elements throughout. The terminology used in the description presented herein is not intended to be interpreted in any limited or restrictive manner simply because it is being used in conjunction with a detailed description of certain specific embodiments of the inventions. In addition, embodiments of the inventions can comprise several novel features, and no single feature is solely responsible for its desirable attributes or is essential to practicing the inventions herein described.

[0044] The use of telehealth to deliver healthcare services has grown consistently over the last several decades and has experienced rapid growth in the last several years. Telehealth can include the distribution of health-related services and information via electronic information and telecommunication technologies. Telehealth can allow for long distance user and health provider contact, care, advice, reminders, education, intervention, monitoring, and admissions. Often, telehealth can involve the use of a user or patient’s personal electronic device such as a smartphone, tablet, laptop, desktop computer, or other type of personal device (referred to herein as a user computing device). For example, a user or patient can interact with a remotely located medical care provider using live video, audio, and/or text through the user computing device.

[0045] Remote or at-home healthcare testing, diagnostics, and screening can solve or alleviate some problems associated with in-person testing. For example, health insurance may not be required, travel to a testing site can be avoided, tests can be completed at a user’s convenience, and costs can be reduced by, for example, reducing the need for physical office space. However, at-home testing introduces various additional logistical and technical issues, such as guaranteeing timely test delivery to a user’s home, providing test delivery from a user to an appropriate entity, ensuring test verification and integrity, providing test result reporting to appropriate entities and medical providers, guiding users through unfamiliar processes such as sample collection and/or processing, and connecting users with medical providers, who are sometimes needed to provide guidance and/or oversight of the testing procedures remotely.

Multi-Test Kits

[0046] Figures 1-2B show embodiments of a multi-test kit. In some embodiments, the multi-test kit can include a plurality of test boxes. Each of the plurality of test boxes can include a diagnostic test. The diagnostic test can be a test for COVID-19, influenza, streptococcal pharyngitis, a urinary tract infection, Lyme disease, illicit drugs, or another diagnostic test. In some embodiments, each of the plurality of test boxes can include a diagnostic test for a different condition. In some embodiments, one or more of the plurality of test boxes can include a diagnostic test for a same condition. In some embodiments, the diagnostic test can include an at-home diagnostic test. In some embodiments, the diagnostic test can include a nasal swab test, a urine test, throat swab test, a saliva test, and/or any other type of diagnostic test.

[0047] As shown in Figure 1, in some embodiments, a user device 102 (e.g., a smartphone, tablet, laptop, desktop, etc.) can be used to take a diagnostic test included in a multi-test kit 104. The multi-test kit 104 can include a plurality of tests.

[0048] Figures 2A-2B illustrate an example of a multi-test kit according to some embodiments. As shown in Figures 2A-2B, a multi-test kit 200 can include a plurality of test kits 202a-202g. It will be appreciated that the number of test kits can vary and can be more or less than shown in Figure 2B. In some embodiments, each of the test kits 202a-202g can be for a different diagnostic test, screening, etc. In some embodiments, one or more of the test kits 202a-202g can be the same as another one of the test kits 202a-202g. For example, there may be two, three, four, or more tests of the same type (e.g., influenza tests, COVID-19 tests, urinary tract infection (UTI) tests, streptococcal pharyngitis (strep) tests, drug tests, sexually transmitted infection (STI) tests, etc.).

[0049] Figures 3 and 4 illustrate examples of diagnostic tests in which a test result can be obtained. For example, as shown in Figures 3 and 4, a test result can be obtained for a COVID-19 test or a UTI test. Other types of tests may also lead to a test result. In some cases, a user may be screened for a particular illness, but a test result may not be obtained. For example, as shown in Figures 5 and 6, in some cases, an at-home diagnostic test may not be available, and the user can be screened using information such as temperature, images, audio, etc. For example, a user may be screened for influenza or strep but may not perform a diagnostic test to determine if the user has influenza or strep.

[0050] Figure 7 shows an overall testing and treatment process according to some embodiments. A test kit 700 can include various test components (e.g., a diagnostic test, health equipment/sensors, a tongue depressor, a thermometer, etc.). The user can use a telehealth service to perform the diagnostic test or a screening to obtain certified test results, a lab report, and/or screening results. If warranted, the user can receive a treatment plan and/or medication. In some cases, a user may receive treatment and/or medication using a home delivery service. In some cases, a user may travel to a pharmacy to obtain treatment and/or medication.

[0051] In some embodiments, the multi-test kit and/or the diagnostic tests can be purchased or obtained by a user prior to the user having the condition such that the user has the multi-test kit available before the user wants to test for the condition.

[0052] In some embodiments, the plurality of test boxes can include access to a telehealth platform for test guidance or screening guidance, access to digital diagnostic tools, a lab report, and/or access to treatment based on results of the diagnostic test or screening results. In some embodiments, the lab report can be a portable document format (PDF) document and/or any other document format, transmitted to the user by email, text message, mail, push notification, and/or any other message format.

[0053] In some embodiments, based on the results of the diagnostic test, the multi-test kit and/or the telehealth platform can provide the user access to a results consultation. The results consultation can include synchronous and/or asynchronous contact between the user and a healthcare provider, a proctor, and/or an artificial intelligence (AI) based assistant or proctor. The synchronous and/or asynchronous contact can include messaging via the telehealth platform, email, text messaging, a phone call, a video call, an in-person consultation, and/or any other synchronous and/or asynchronous contact. In some embodiments, the results consultation can assist the user with reading or interpreting the results of the diagnostic test. In some embodiments, the telehealth platform can use AI, machine learning (ML), and/or computer vision (CV) algorithms to assist the user with reading or interpreting the results of the diagnostic test.

[0054] In some embodiments, the plurality of test boxes can include one or more single-use and/or multi-use items such as thermometers or other sensors, reference cards, and/or any other single-use or multi-use diagnostic testing items. In some embodiments, the condition can include COVID-19, urinary tract infection (“UTI”), influenza (“Flu”), streptococcus (“Strep”), and/or any other medical condition.

[0055] In some embodiments, the plurality of test boxes can each include different contents or equipment depending on which condition the test box is associated with.

[0056] In some embodiments, based on the results of the diagnostic test, the multi-test kit and/or the telehealth platform can provide access to a treatment plan. The treatment plan can include a prescription, over-the-counter medication, supplements, lifestyle changes, a follow-up appointment, and/or any other condition treatments.

[0057] As shown in FIGS. 5 and 6, one or more conditions, such as flu, strep, etc., may not have approved at-home tests available to users. In some embodiments, the diagnostic tests for the one or more conditions can include a screening instead of a test.

[0058] The test boxes for the one or more conditions can include single-use or multiuse items, such as sensors (e.g., thermometers) for obtaining user health information (e.g., temperature), a tongue depressor, reference cards, etc. In some embodiments, the user can use the tongue depressor to keep a tongue of the user out of the way while capturing one or more images of a mouth of the user.

[0059] In some embodiments, the screening can include hardware or sensors to assist the user with obtaining the user health information. The hardware or sensors can include user device attachments and/or any other hardware or sensors. In some embodiments, the screening and/or the telehealth platform can include AI, ML, and/or CV based tools to assist the user with one or more steps of a screening process.

[0060] In some embodiments, the screening can include access to the telehealth platform. The telehealth platform can provide the user with a proctored telehealth session. A proctor can guide the user through one or more steps of the screening process. In some embodiments, the screening process can include a clinical intake. The clinical intake can include a survey, a questionnaire, or any form with questions or inputs related to the condition or related conditions. In some embodiments, the proctor can ask the user questions or inputs of the clinical intake and the user can provide responses to the proctor. The proctor can enter the responses into the clinical intake. In some embodiments, the clinical intake can be displayed to the user via the telehealth platform, a website, and/or an application, and the user can complete the clinical intake. In some embodiments, the proctor can guide the user through completing the clinical intake. In some embodiments, the screening can include online- or application-based digital diagnostic testing (e.g., cough detection, CV imaging, etc.).

[0061] In some embodiments, results of the screening can be recorded in the telehealth platform by the proctor or automatically by the telehealth platform. In some embodiments, the telehealth platform can automatically generate and transmit a report including the results to the user. In some embodiments, the telehealth platform can automatically transmit the report to the proctor, a healthcare provider and/or any other medical professional. In some embodiments, the user, the proctor, the healthcare provider and/or other medical professional can use the report to facilitate a follow-up appointment via the telehealth platform or in-person. The follow up appointment can include a consultation or appointment with a physician, a healthcare provider and/or any other medical professional. In some embodiments, the telehealth platform can automatically generate a treatment plan and/or prescribe or order medication for the user.

[0062] As shown in FIG. 7, the prescription, treatment, or over-the-counter medication ordered for the user as a result of the diagnostic test or the screening can be delivered to the user via a courier of the telehealth platform and/or a third-party courier service. In some embodiments, the user can pick up the prescription, treatment, or over-the-counter medication from a pharmacy or local store.

[0063] In some embodiments, the user can make an up-front payment for the multi-test kit and/or each of the diagnostic test boxes. The up-front payment can include a cost of each of a plurality of steps of a diagnostic test or screening, the equipment or contents of the diagnostic test boxes, treatments included in the diagnostic test boxes, prescriptions, over-the-counter medication, results consultations, a follow-up appointment and/or any other steps or services of the diagnostic testing or screening process.

[0064] Advantageously, the up-front payment can make healthcare costs more predictable for users than in-person healthcare and/or traditional telehealth healthcare. Additionally, the user can make the up-front payment to one entity instead of making multiple payments through the telehealth process to multiple different entities (e.g., a physician, lab, pharmacy, etc.). The up-front payment can reduce a complexity of healthcare treatment for the user and/or a financial burden of the user associated with medical advice and/or treatment. The up-front payment and/or the multi-test kit can improve access to quality care.

[0065] FIG. 8 is a flowchart that illustrates an example method of performing a diagnostic test or screening using a multi-test kit. At step 802, a user can select a diagnostic test kit from the multi-test kit based on one or more symptoms experienced by the user, guidance from a telehealth platform based on the one or more symptoms experienced by the user, instructions provided by an employer (e.g., instructions to take a test for an infectious disease or instructions to take a drug test), or so forth. The test kit can include a test for, for example and without limitation, COVID-19, influenza, a UTI, streptococcal pharyngitis, mononucleosis, human papillomavirus, one or more illicit drugs, and so forth. At step 804, the user can capture an image of the diagnostic test box or a code on the diagnostic test box with a camera of a user device (e.g., a smartphone, tablet, etc.). At step 806, imaging the diagnostic test box or the code can cause the user to be directed to a website and/or application. The website and/or application can include information about the diagnostic test box, a test or screening procedure, and/or contents of the diagnostic test box. In some embodiments, the website and/or application can automatically compare an expiration date of the diagnostic test box with a current date to determine if the diagnostic test box is expired.

[0066] At step 808, the user can open the diagnostic test box. The diagnostic test box can include instructions, a diagnostic test, thermometer, sensors, instruction card, test results card, link or code associated with a website or application where telehealth services are provided, and/or any other materials or equipment for a diagnostic test or screening.

[0067] At step 810, the user can select or confirm which diagnostic test box (also referred to herein as a test kit container) the user selected. In some embodiments, the user can select from a list of tests, or the telehealth platform can automatically determine the diagnostic test box the user selected based on the image of the diagnostic test box or the code.

[0068] At step 812, the user can perform one or more steps of a diagnostic test or screening (e.g., video call with proctor, voice call with proctor, audio and/or visual cues, augmented reality (AR) graphical instructions, etc.) via the telehealth platform or a connection to the telehealth platform. One or more of the steps of the diagnostic test or screening can include the user providing answers to clinical questions, taking a temperature of the user, capturing one or more images of a mouth, a throat, and/or a face of the user, answering questions about symptoms, and/or any other steps of diagnostic testing or screening.

[0069] At step 814, the user can receive results of the diagnostic test or screening. In some embodiments, the user can interpret the results. In some embodiments, a proctor or the telehealth platform via computer vision, artificial intelligence (AI), and/or machine learning (ML) can interpret the results.

[0070] At step 816, the user can be provided with one or more next steps. For example, based on the results, the user can qualify for one or more treatments or additional healthcare services. The telehealth platform can provide the user with a results consultation, a prescription or order for medication or supplements, and/or any other treatment. In some embodiments, the user can pick up a treatment or the treatment can be delivered to the user.

Telehealth Testing Procedures

[0071] In some embodiments, the telehealth systems described herein can provide self-guided urinary tract infection (UTI) testing. In some embodiments, the system can automatically confirm an identity of a user using one or more images of the user, an identification of the user, or both. In some embodiments, the system can automatically verify a test result by verifying a test strip or test kit. In some embodiments, the system can automatically provide a diagnosis. In some embodiments, the system can automatically order one or more recommended treatments for a user. In some embodiments, the system can automatically direct the user to a clinician for follow up care, for example to determine if a prescription or over the counter treatment is warranted. In some embodiments, if a prescription treatment is warranted, the system can automatically place an order for the prescription at a pharmacy selected by the user.

[0072] In some embodiments, a user can receive a test strip, instructions for use (IFU), a test results interpretation card, or any combination of these items. In some embodiments, the results interpretation card can include one or more computer vision features. In some embodiments, the telehealth platform or the user can capture one or more images of the results interpretation card via a camera of the user device. In some embodiments, the system can analyze the one or more computer vision features using artificial intelligence (AI), machine learning (ML), and/or computer vision algorithms, as described in more detail herein.

[0073] In some embodiments, a test kit container, test results card, or both can include a machine-readable code (e.g., a QR code, bar code, etc.) that can be scanned with a user device to initiate a telehealth session. In some embodiments, when a user scans the machine-readable code to begin a test session, the telehealth platform can automatically direct the user to a sign in page or account creation page. In some embodiments, if a user is already logged in, the system can direct the user to a next step in the testing procedure, as described in more detail below. In some embodiments, if the user does not have an account, the telehealth platform can prompt the user to create an account. In some embodiments, the user can create a username and/or password for the account. In some embodiments, the user can sign on using a single sign on method. In some embodiments, a user can sign on using an alternative to passwords, such as a passkey.

[0074] In some embodiments, the telehealth platform can prompt the user to input user information. The user information can include, for example, a telehealth questionnaire, a clinical questionnaire, a preferred pharmacy, and so forth. In some embodiments, if the user already has an account and the user has previously input the user information, the telehealth platform can skip asking the user to input user information.

[0075] In some embodiments, the telehealth platform can automatically analyze a camera and/or a microphone of the user device to determine if the camera and/or microphone work properly, meet minimum specifications, and so forth. In some embodiments, a proctor and/or the telehealth platform can prompt the user to verify the user’s identity. In some embodiments, the user can capture one or more images of an identification of the user (e.g., a passport or driver license), an image of the user, or both, and the telehealth platform and/or the proctor can analyze the identification, the image of the user, or both to verify the user’s identity. In some embodiments, the telehealth platform can automatically verify the identity of the user by analyzing the identification, the image of the user, or both using an ML algorithm or computer vision algorithm. In some embodiments, if the user previously interacted with the telehealth platform, the telehealth platform can have an image of the identification, the user, or both stored in user account data. In some embodiments, the telehealth platform and/or the proctor can compare the captured one or more images of the identification, the user, or both to the stored image of the identification, the user, or both. In some embodiments, if the captured one or more images of the identification, the user, or both match the stored image of the identification, the user, or both, the telehealth platform can confirm the identity of the user.

[0076] In some embodiments, after the telehealth platform confirms the user’s identity, the telehealth platform can confirm the test strip is the same strip the user started the test session with and confirm the test strip has not expired. In some embodiments, the user can scan or capture an image of the machine-readable code on the test kit container, the test results interpretation card, etc., to confirm that the machine-readable code is the same machine-readable code that the user scanned to begin the test session. In some embodiments, the user can input the expiration date into the telehealth platform. In some embodiments, the user can input the expiration date verbally via the microphone of the user device. In some embodiments, the machine-readable code can contain the expiration date and/or can contain information that can be used to determine the expiration date, for example a unique identifier that can be used to query a database that contains expiration date information. In some embodiments, the telehealth platform can be configured to automatically decode the machine-readable code. In some embodiments, the telehealth platform can compare the expiration date to a current date to confirm that the test has not expired.
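A minimal sketch of this confirmation step follows, assuming the machine-readable code is a QR code and using an in-memory table as a stand-in for the expiration-date database query described above; the kit identifier and date are illustrative placeholders.

import cv2
from datetime import date

# Hypothetical lookup table mapping a kit's unique identifier to its
# expiration date; in practice this would be a database query.
KIT_EXPIRATIONS = {"KIT-12345": date(2025, 6, 30)}

def check_kit(image_path: str, session_code: str) -> bool:
    """Decode the machine-readable code on the kit, confirm it matches the
    code that began the session, and confirm the kit has not expired."""
    img = cv2.imread(image_path)
    if img is None:
        return False
    data, _, _ = cv2.QRCodeDetector().detectAndDecode(img)
    if not data or data != session_code:
        return False  # not the same kit the session started with
    expiration = KIT_EXPIRATIONS.get(data)
    return expiration is not None and date.today() <= expiration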

[0077] In some embodiments, if the telehealth platform confirms the test strip is the same test strip as the one the user started the test session with and confirms the test strip has not expired, the user can remove the test strip, IFU, and test results interpretation card from the test kit container. In some embodiments, the telehealth platform and/or the proctor can confirm that the test kit container contains necessary components, for example by prompting the user to capture one or more images of the test strip, IFU, results interpretation card, and so forth. In some embodiments, the telehealth platform may make use of a video feed to observe the test kit components.

[0078] In some embodiments, after the telehealth platform and/or the proctor confirm the contents of the test kit container, the telehealth platform and/or the proctor can provide an explanation of one or more next steps to the user. In some embodiments, the telehealth platform can cause text and/or graphics to be displayed on a display of the user device. In some embodiments, the proctor can verbally provide an explanation of one or more next steps to the user. In some embodiments, the one or more next steps can include one or more steps of a self-guided diagnostic test. In some embodiments, providing instructions regarding next steps can reduce the number of user errors when performing self-guided test steps.

[0079] In some embodiments, after the explanation of the one or more next steps, the proctor can disconnect from the video session so that the proctor cannot observe the user and the user cannot observe the proctor. In some embodiments, the proctor can maintain a connection to the video session but may not be able to share and/or receive audio and/or video.

[0080] In some embodiments, the user can perform one or more self-guided test steps to capture a test sample with the test strip. In some embodiments, the user can capture the test sample by urinating on the test strip. In some embodiments, the telehealth platform can prompt the user to wait for a predetermined time after capturing the test sample. In some embodiments, the telehealth platform can cause a timer to be displayed on the display of the user device. In some embodiments, the timer can count up to the predetermined time. In some embodiments, the timer can count down from the predetermined time. In some embodiments, the timer may start in response to a user tapping or otherwise selecting a button or similar input using the user device. In some embodiments, the predetermined time can be 1 s or about 1 s, 5 s or about 5 s, 10 s or about 10 s, 20 s or about 20 s, 30 s or about 30 s, 1 minute or about 1 minute, 2 minutes or about 2 minutes, 3 minutes or about 3 minutes, 4 minutes or about 4 minutes, 5 minutes or about 5 minutes, 10 minutes or about 10 minutes, 15 minutes or about 15 minutes, 20 minutes or about 20 minutes, 30 minutes or about 30 minutes, 45 minutes or about 45 minutes, 1 hour or about 1 hour, or any value between these values, or more.
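A console-level sketch of such a countdown timer follows; a real client would render the timer in the user interface and start it when the user taps a button, as described above. The per-second callback is an illustrative assumption.

import time

def wait_with_countdown(predetermined_s: int, on_tick=print):
    """Count down from a predetermined wait time (e.g., a strip develop
    time), invoking a display callback once per second."""
    for remaining in range(predetermined_s, 0, -1):
        on_tick(f"{remaining // 60:02d}:{remaining % 60:02d}")
        time.sleep(1)
    on_tick("00:00")  # predetermined time has elapsed

# e.g., wait_with_countdown(120) for a 2 minute wait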

[0081] In some embodiments, after the predetermined time has elapsed, the telehealth platform can prompt the user to capture one or more images of the test strip. In some embodiments, the user can capture the one or more images of the test strip without a prompt from the telehealth platform.

[0082] In some embodiments, the proctor can reconnect to the video conference and/or can reenable video and/or audio. In some embodiments, the video conference can be automatically reestablished in response to the predetermined time elapsing and/or in response to the user capturing one or more images of the test strip. In some embodiments, the same proctor can join the video conference or stay on the video conference, which can reduce connection or wait times for the user. In some embodiments, a different proctor can join the video conference or establish a new video conference, for example if the previous proctor is unavailable.

[0083] In some embodiments, the proctor can analyze the one or more images of the test strip to interpret a test result. In some embodiments, the telehealth platform can automatically analyze the one or more images of the test strip to interpret the test result, for example using a machine learning and/or computer vision algorithm. In some embodiments, the proctor and/or the telehealth platform can guide the user through analyzing and/or interpreting the test result. In some embodiments, the proctor can input an analysis or interpretation of the test result into the telehealth platform.
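The application does not name a specific algorithm for automatic interpretation. One conventional approach compares pixel intensity in the control-line and test-line regions of an aligned, cropped strip image; the region coordinates and threshold below are illustrative placeholders, not values from this application.

import cv2
import numpy as np

# Illustrative pixel regions (rows, cols) for the control and test lines
# on an aligned, cropped strip image; real coordinates depend on layout.
CONTROL_ROI = (slice(40, 60), slice(10, 90))
TEST_ROI = (slice(100, 120), slice(10, 90))
LINE_THRESHOLD = 30  # intensity drop vs. background indicating a line

def interpret_strip(strip_bgr):
    """Classify a lateral-flow strip image as positive/negative/invalid
    by comparing line-region darkness against the background median."""
    gray = cv2.cvtColor(strip_bgr, cv2.COLOR_BGR2GRAY)
    background = float(np.median(gray))
    control = background - float(gray[CONTROL_ROI].mean())
    test = background - float(gray[TEST_ROI].mean())
    if control < LINE_THRESHOLD:
        return "invalid"  # control line absent: the test did not run
    return "positive" if test >= LINE_THRESHOLD else "negative"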

[0084] In some embodiments, the proctor can discuss the test result with the user. In some embodiments, the telehealth platform can provide a summary of the result, a diagnosis, recommended treatment, and/or other information, which can be accessed by the user via the user device. In some embodiments, the proctor and/or the telehealth platform can provide an explanation of one or more next steps to the user.

[0085] In some embodiments, after the proctor and/or telehealth platform explains the one or more next steps to the user, the proctor can end the video conference, or the telehealth platform can automatically end the video conference.

[0086] In some embodiments, the telehealth platform can automatically transmit the test result, diagnosis, and/or other test information to a medical provider or clinician partner. In some embodiments, the medical provider or clinician partner can review the received information to determine a second diagnosis and/or to confirm or reject the original diagnosis. In some embodiments, the medical provider or clinician partner can provide a rating of the diagnosis, for example a star rating. Such ratings can be used to, for example, evaluate the performance of a human proctor, to evaluate the performance of machine learning and/or computer vision algorithms, and so forth.

[0087] In some embodiments, the medical provider or clinician partner can initiate an asynchronous telehealth consultation with the user. In some embodiments, the asynchronous telehealth consultation can include one or more messages (e.g., phone calls, emails, text messages, etc.) between the medical provider or clinician partner and the user. In some embodiments, the asynchronous telehealth consultation can result in the medical provider or clinician partner confirming the recommended treatment. In some embodiments, the telehealth platform can automatically order the recommended treatment for the user in response to the medical provider or clinician partner confirming the recommended treatment. In some embodiments, the recommended treatment can include a prescription that the telehealth platform can automatically transmit to a pharmacy. In some embodiments, the medical provider or clinician partner may determine that the recommended treatment is not warranted, and the recommended treatment may not be ordered and/or an alternative treatment may be ordered. In some embodiments, the medical provider or clinician partner can recommend that the user schedule a follow up appointment, retest after a period of time, or take other actions.

[0088] Figures 9A and 9B illustrate an example process for UTI testing according to some embodiments. At step 902, a user can scan a machine-readable code on a test kit box with a user device. At step 904, the user can sign in or create an account with a telehealth service (also referred to herein as a telehealth platform). At step 906, the user can complete a telehealth service questionnaire (which can include, for example, the user’s name, contact information, etc.). At step 908, the user can complete a clinical questionnaire (which can include questions related to, for example, current conditions, past conditions, past surgeries, current medications, past medications, known drug allergies, and so forth). At step 910, the user can select a preferred pharmacy. At step 912, the system can perform an automated check of user device capabilities, for example to confirm that the user device has a camera and microphone that work and that meet minimum requirements. At step 914, the telehealth platform can present the user with a welcome. In some embodiments, the welcome may be provided by a proctor after initiation of a video conference session. At step 916, the proctor and/or the system can determine if the user’s identity has been verified. If the user’s identity has not been verified, steps 946, 948, and 950 can be performed by a proctor or automatically by the telehealth platform to verify identification, capture and store an image of the user and/or the user’s identification, and to mark the user as verified. If the user’s ID is verified, the process can continue at step 918 to verify that the user is the same user as the user whose ID was verified. If not, the process can proceed to steps 946, 948, and 950 to verify the user. If the user’s ID is verified and the user matches a stored image, the process can continue to step 920 to confirm the test kit and expiration date. At step 922, the user can unpack the test kit and check the components. In some embodiments, a proctor can verify that required components are present. In some embodiments, machine learning and/or computer vision can be used to automatically determine that required components of the test kit are present. At step 924, the proctor and/or the telehealth platform can explain one or more next steps to the user. After step 924, the video session can be paused or ended. At step 926, the user can perform one or more self-guided test steps. At step 928, the user can wait a predetermined time. At step 930, the user can capture one or more images of a test strip or card. In some embodiments, at step 930, the video conference session can be resumed or restarted, and the user can show the test strip or card in the video conference session. At step 932, the proctor can interpret the results. In some embodiments, alternatively or additionally, the telehealth platform can interpret the results, for example using machine learning and/or computer vision. At step 934, the proctor can wrap up the session and the telehealth platform can provide a summary for display on a screen of the user’s user device. At step 936, the telehealth platform can send data to a clinician partner. The data can include, for example, test results, test information, user information, etc. At step 938, the telehealth platform can receive a clinician rating of a diagnosis, recommended treatment, etc., provided by the telehealth platform (automatically and/or by a proctor). At step 940, the clinician partner can, based on a review of the information provided by the telehealth platform, an asynchronous consultation with the user, or both, determine whether or not to follow a recommended treatment. If the clinician partner agrees with the recommendation, the recommended treatment can be provided at step 942. If the clinician partner disagrees, the treatment may not be prescribed at step 944. In some embodiments, a clinician partner may prescribe an alternative treatment, provide a different diagnosis, advise the user on next steps (e.g., follow up, retesting, etc.).

[0089] In some embodiments, a telehealth platform can be used to perform an at-home test for streptococcal pharyngitis. A testing procedure can share many components of the process described above for UTI testing. However, in some embodiments, samples may not be collected, and a diagnosis may instead be made based on, for example, visual observation of the user, images collected of the user’s mouth and/or throat, information about the user’s temperature, and so forth. In some embodiments, a telehealth platform can verify a quality of a captured image. In some embodiments, the telehealth platform can verify a temperature of the patient. In some embodiments, the telehealth platform can automatically provide a diagnosis. In some embodiments, the telehealth platform can recommend one or more treatments. In some embodiments, the telehealth platform can order one or more treatments.

[0090] In some embodiments, a system can include a telehealth platform. The telehealth platform can include streptococcal pharyngitis (strep) testing. In some embodiments, a user can receive a thermometer, a tongue depressor, and/or an information card. In some embodiments, the information card can include a burnable voucher (e.g., the information card can contain a machine-readable code that can only be used a single time). In some embodiments, a test kit container can include a machine-readable code. In some embodiments, the telehealth platform can limit use of the machine-readable code to a single test. In some embodiments, the information card and/or test kit container can include one or more computer vision features. The telehealth platform or the user can capture one or more images of the information card and/or test kit container via a camera of a user device and analyze the one or more computer vision features via artificial intelligence, machine learning, and/or computer vision algorithms.

[0091] In some embodiments, a test kit container can contain the thermometer, the tongue depressor, and/or the information card. The test kit container and/or the information card can include a machine-readable code. The machine-readable code can be a barcode, a QR code, and/or any other machine-readable code. In some embodiments, when a user scans the machine-readable code on the test kit container and/or the information card to begin a test session, the telehealth platform can automatically direct the user to a sign in and/or account creation page. If the user does not have an account, the telehealth platform can prompt the user to create an account. In some embodiments, the user can create a username and/or a password for the account.
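A minimal sketch of "burning" such a one-time voucher follows, using an in-memory set as a stand-in for the persistent, atomic database update a production platform would use.

# Hypothetical in-memory store of already-redeemed voucher codes.
_redeemed: set[str] = set()

def redeem_voucher(code: str) -> bool:
    """Return True the first time a voucher code is presented; False on
    any later attempt, so a kit cannot start a second test session."""
    if code in _redeemed:
        return False
    _redeemed.add(code)
    return True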

[0092] In some embodiments, the telehealth platform can prompt the user to input user information. The user information can include a telehealth questionnaire, a clinical questionnaire, and/or a preferred pharmacy. If the user already has an account and the user previously input the user information, the telehealth platform can automatically skip prompting the user to input the user information.

[0093] In some embodiments, the telehealth platform can automatically analyze a camera and/or a microphone of the user device to determine if the camera and/or microphone work properly. In some embodiments, the telehealth platform can automatically analyze the camera and/or the microphone to determine if the camera and/or microphone can capture images and/or audio with a quality at or above a threshold for telehealth testing.

[0094] In some embodiments, the telehealth platform can connect the user with a proctor via a video connection. In some embodiments, the proctor and/or the telehealth platform can prompt the user to verify the user’s identity. In some embodiments, the user can capture one or more images of an identification of the user and/or an image of the user and the proctor can analyze the identification and/or the image of the user. In some embodiments, the user can capture one or more images of the identification and/or the image of the user, and the telehealth platform can automatically analyze the one or more images of the identification and/or the image of the user via a machine learning and/or computer vision algorithm. In some embodiments, if the user previously interacted with the telehealth platform, the telehealth platform can have an image of the identification and/or the user stored in a user account database. In some embodiments, the telehealth platform and/or the proctor can compare the captured one or more images of the identification and/or the image of the user to the stored image of the identification and/or the stored image of the user. In some embodiments, if the captured one or more images of the identification and/or the image of the user match the stored image of the identification and/or the stored image of the user, the telehealth platform can confirm the identity of the user.
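As an illustration of the automated comparison step only, assume the captured image and the stored account image have each been reduced to a face embedding by an upstream model (the application does not specify one); a cosine-similarity match could then look like the following. The threshold is an illustrative placeholder tuned per embedding model.

import numpy as np

MATCH_THRESHOLD = 0.8  # illustrative; depends on the embedding model

def cosine_similarity(a, b):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def verify_identity(captured_embedding, stored_embedding) -> bool:
    """Compare a face embedding from the live capture against the
    embedding stored with the user's account (both assumed to come from
    an upstream face-embedding model not specified in the application)."""
    return cosine_similarity(captured_embedding, stored_embedding) >= MATCH_THRESHOLD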

[0095] If the captured one or more images of the identification and/or the image of the user do not match the stored image of the identification and/or the stored image of the user, or the telehealth platform does not have an image of the identification and/or an image of the user stored in the user account database, the telehealth platform can prompt the user to establish the identity of the user. To establish the identity of the user, the user or the telehealth platform can capture an image of the user and the identification of the user, and the proctor can confirm the user and information on the identification match. The telehealth platform can automatically store the image of the user and confirm the identity of the user.

[0096] In some embodiments, after the telehealth platform confirms the identity of the user, the telehealth platform can confirm the test kit container is a same test kit container the user used to start the test session. In some embodiments, the telehealth platform can confirm the test kit has not expired. In some embodiments, the user can scan or capture an image of the machine-readable code on the test kit container. In some embodiments, the telehealth platform can automatically analyze the machine-readable code to confirm the machine-readable code is a same machine-readable code the user scanned to begin the test session. In some embodiments, the user can input the expiration into the telehealth platform. In some embodiments, the user can input the expiration verbally via the microphone of the user device. In some embodiments, the machine-readable code can contain the expiration, and the telehealth platform can automatically analyze the machine-readable code to extract the expiration. In some embodiments, the machine-readable code can include information that can be used to confirm the expiration date of the test kit. For example, the machine-readable code can include a unique identifier that can be used to query a database that contains the expiration date of the test kit. In some embodiments, the telehealth platform can compare the expiration to a current date to confirm the test kit has not expired.

[0097] In some embodiments, the telehealth platform can automatically change a camera of the user device from a front facing camera to a rear facing camera. In some embodiments, the telehealth platform and/or the proctor can prompt the user to capture an image of the burnable voucher. In some embodiments, the telehealth platform can automatically determine when the burnable voucher is in view of the rear facing camera and automatically capture an image of the burnable voucher. In some embodiments, the telehealth platform can automatically analyze the image of the burnable voucher to confirm the test kit has not previously been used for a previous test session. In some embodiments, the burnable voucher can be the same as the machine-readable code.
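For illustration only, the kit validation checks described above (code match, expiration lookup, and single-use voucher burning) could be combined as in the following Python sketch; the kit_db mapping and its record fields are hypothetical stand-ins for whatever database the platform actually uses:

    import datetime

    def validate_test_kit(scanned_code, session_code, kit_db):
        # kit_db maps a kit's unique identifier to its expiration date and
        # whether its burnable voucher was already consumed (illustrative schema).
        if scanned_code != session_code:
            return "code_mismatch"     # not the kit used to start the session
        record = kit_db.get(scanned_code)
        if record is None:
            return "unknown_kit"
        if record["expiration"] < datetime.date.today():
            return "expired"
        if record["voucher_used"]:
            return "already_used"      # voucher was burned in a previous session
        record["voucher_used"] = True  # burn the voucher to enforce single use
        return "ok"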

[0098] In some embodiments, the telehealth platform can automatically change the camera of the user device from the rear facing camera to the front facing camera after the image of the burnable voucher is captured. In some embodiments, the telehealth platform and/or the proctor can provide an explanation of one or more next steps to the user. In some embodiments, the telehealth platform can display text and/or graphics to the user via the display of the user device. In some embodiments, the proctor can verbally provide the explanation of the one or more next steps to the user.

[0099] In some embodiments, the telehealth platform and/or the user can capture one or more images and/or videos of a face, hand, and/or eyes of the user. In some embodiments, the telehealth platform and/or the user can capture audio of a voice of the user, for example using a microphone of the user device. In some embodiments, the telehealth platform and/or the user can capture the one or more images and/or videos, and the audio while the user palpates one or more lymph nodes. In some embodiments, the telehealth platform and/or the proctor can use the one or more images and/or videos, and the audio to determine a diagnosis.

[0100] In some embodiments, the proctor can observe the user while the user palpates the one or more lymph nodes to determine if the one or more lymph nodes are swollen and input the determination into the telehealth platform. In some embodiments, a machine learning model can be used to analyze received video of the user to determine if the user’s lymph nodes are swollen.

[0101] In some embodiments, after the user palpates the one or more lymph nodes, the telehealth platform and/or the proctor can prompt the user to insert the thermometer into the mouth of the user. In some embodiments, the proctor can start a timer when the user inserts the thermometer into the mouth of the user, or the telehealth platform can analyze a video recording of the user inserting the thermometer into the mouth of the user to detect when the user inserts the thermometer into the mouth of the user and the telehealth platform can automatically start the timer. In some embodiments, the user can initiate a timer when the user inserts the thermometer into the user’s mouth. In some embodiments, the telehealth platform can display a timer, and the timer can count up to a predetermined time or the timer can count down from the predetermined time. In some embodiments, the predetermined time can be 1 s or about 1 s, 5 s or about 5 s, 10 s or about 10 s, 20 s or about 20 s, 30 s or about 30 s, 1 min or about 1 min, 2 min or about 2 min, 3 min or about 3 min, 4 min or about 4 min, 5 min or about 5 min, 10 min or about 10 min, 15 min or about 15 min, 20 min or about 20 min, 30 min or about 30 min, 45 min or about 45 min, 1 hour or about 1 hour, and/or any value between the aforementioned values or more.

[0102] In some embodiments, after the user waits for the predetermined time, the telehealth platform and/or the proctor can prompt the user to capture an image of the thermometer. In some embodiments, the user can place the thermometer on the information card. In some embodiments, the telehealth platform can automatically change the camera of the user device from the front facing camera to the rear facing camera and the user can capture an image of the thermometer on the information card. In some embodiments, the proctor can record a temperature displayed by the thermometer, or the telehealth platform can use artificial intelligence, machine learning, and/or a computer vision algorithm to automatically determine the temperature displayed by the thermometer. For example, in some embodiments, a thermometer can be a digital thermometer including a display, and a machine learning and/or computer vision algorithm can be configured to extract the temperature from an image that includes the display. In some embodiments, a thermometer can use fluid expansion, and computer vision and/or machine learning can be used to determine a temperature scale indicated on the thermometer and a level of the fluid in a capillary. In some embodiments, a thermometer can use dots or other indicators that can be read using a computer vision and/or machine learning model.

[0103] In some embodiments, the telehealth platform and/or the proctor can prompt the user to input whether the user has a caretaker present. The caretaker can include a parent of the user, a significant other of the user, and/or any other person physically present with the user. If the user has a caretaker present, the telehealth platform and/or the proctor can prompt the caretaker to capture an image of the throat of the user. If the user does not have a caretaker present, the telehealth platform and/or the proctor can prompt the user to capture an image of the throat of the user with the rear facing camera of the user device. In some embodiments, the telehealth platform and/or the proctor can prompt the user to capture an image of the user’s throat with a front facing camera of the user device. In some embodiments, the user can capture the image of the throat by capturing an image of the throat in a mirror. In this way, the user can use the rear facing camera to capture the image. Often, a rear facing camera can have a greater resolution, larger sensor, etc., than a front facing camera. In some embodiments, the user or the caretaker can insert a tongue depressor into the mouth when the user or the caretaker captures the image. In some embodiments, the telehealth platform and/or the proctor can provide instruction to the user and/or the caretaker to capture the image.

[0104] In some embodiments, after the telehealth platform and/or the proctor prompts or instructs the user, the proctor can turn off a video stream of the video conference, and the rear facing camera can capture a high-resolution and/or high-quality image.

[0105] In some embodiments, the user or the caretaker can capture the image of the throat. After the user or the caretaker captures the image of the throat, the user can review and analyze a quality of the image. If the quality is below a predetermined threshold, the user or the caretaker can retake the image of the throat and review the quality until the quality is above the predetermined threshold. If the quality is above the predetermined threshold, the user can input a confirmation that the quality is above the predetermined threshold. In some embodiments, the telehealth platform can transmit the image to the proctor. The proctor and/or the telehealth platform can review and analyze the quality of the image. If the quality is below a predetermined threshold, the user or the caretaker can retake the image of the throat and review the quality until the quality is above the predetermined threshold. If the quality is above the predetermined threshold, the proctor and/or the telehealth platform can confirm that the quality is above the predetermined threshold, review the image, etc.

[0106] In some embodiments, the proctor and/or the telehealth platform can turn on the video stream of the video conference. In some embodiments, the telehealth platform can automatically analyze the image to determine a diagnosis. In some embodiments, the proctor can discuss the diagnosis with the user and the telehealth platform can display the diagnosis, recommended treatment, and/or other test information. In some embodiments, the proctor and/or the telehealth platform can explain one or more next steps to the user.

[0107] In some embodiments, after the proctor and/or the telehealth platform explains the one or more next steps to the user, the proctor can end the video conference, or the telehealth platform can automatically end the video conference.

[0108] In some embodiments, after the proctor or the telehealth platform ends the video conference, the telehealth platform can automatically transmit the image, the diagnosis, the recommended treatment, and/or other test information to a medical provider or clinician partner.

[0109] In some embodiments, the medical provider or clinician partner can analyze the image, the diagnosis, and/or the other test information to determine a second diagnosis. In some embodiments, the medical provider or clinician partner can provide a rating of the diagnosis. The rating can include a star rating.

[0110] In some embodiments, the medical provider or clinician partner can initiate an asynchronous telehealth consultation with the user. In some embodiments, the asynchronous telehealth consultation can include one or more messages between the medical provider or clinician partner and the user. In some embodiments, the asynchronous telehealth consultation can result in the medical provider or clinician partner confirming the recommended treatment to enable the telehealth platform to automatically order the recommended treatment for the user. In some embodiments, the recommended treatment can include a prescription that the telehealth platform can automatically transmit to a pharmacy.

[0111] In some embodiments, the telehealth consultation can result in the medical provider or clinician partner determining the user does not require the recommended treatment, or the medical provider or clinician partner can recommend the user schedule a follow-up appointment.

[0112] Figures 10A-10C illustrate an example process for conducting a streptococcal pharyngitis test according to some embodiments. At step 1002, a user can scan a machine-readable code on a test kit container using a camera of a user device. At step 1004, the user can sign in to or create an account with a telehealth service. For example, in response to scanning the code, the user can be directed to a web page or application for interacting with the telehealth service. At step 1006, the user can complete a telehealth service questionnaire. At step 1008, the user can complete a clinical questionnaire. At step 1010, the user can select a preferred pharmacy. In some embodiments, steps 1006, 1008, and 1010 can be skipped if the user has previously completed these steps. At step 1012, the telehealth service can cause an automated check of user device capabilities to be performed on the user device. For example, the telehealth service can check that a rear facing camera, front facing camera, and/or microphone of the user device meets minimum requirements. At step 1014, the telehealth service can cause a welcome screen to be displayed to the user on a display of the user device. In some embodiments, a video conference session can be initiated. At step 1016, the system can determine if an identification document of the user has been verified. At step 1018, the system can determine if the user (e.g., as observed via the video conference session) matches an image of the user stored by the telehealth service. If the ID is not verified, the user does not match the stored image, or both, the telehealth platform can carry out a verification procedure by following steps 1066, 1068, and 1070. At step 1066, an identification document procedure can be followed to verify the user’s identification. For example, in some embodiments, a user can provide an image of an identification document. In some embodiments, a proctor can extract information from the identification document. In some embodiments, a machine learning and/or computer vision algorithm can extract information from the identification document, such as name, date of birth, driver license number, etc. At step 1068, the telehealth platform can capture and store an image of the user, for example by capturing one or more frames of the video conference session and/or prompting the user to capture one or more images of the user. At step 1070, after verifying the user’s identification and capturing and storing an image of the user, the telehealth service can mark the user as verified. In some embodiments, a proctor may mark the user as verified. In some embodiments, the telehealth service can automatically mark the user as verified in response to a proctor indicating that the captured image of the user matches the identification document. In some embodiments, a machine learning and/or computer vision algorithm can be used to compare an image of the user in the identification document with the captured image of the user.
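As one hedged illustration of the automated image comparison at step 1070, a platform might compare face embeddings with a cosine-similarity threshold; the embeddings themselves would come from whatever face-recognition model the platform uses (not shown here), and the threshold below is a placeholder:

    import numpy as np

    def faces_match(stored_embedding, live_embedding, threshold=0.6):
        # Cosine similarity between the stored ID-photo embedding and the
        # embedding of the live capture; both are produced by an external model.
        a = stored_embedding / np.linalg.norm(stored_embedding)
        b = live_embedding / np.linalg.norm(live_embedding)
        return float(np.dot(a, b)) >= threshold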

[0113] If, at step 1018, the user identification is verified and the user matches the stored image, the telehealth service can confirm the test kit container and expiration date at step 1020. At step 1022, the user can unpack and check the components of the test kit included in the test kit container. In some embodiments, a proctor may observe the components to verify that all necessary components are present. At step 1024, the telehealth service can provide instructions to cause the phone to capture video from a rear facing camera of the user device, and the user can capture an image of the voucher code or otherwise scan the voucher code. The voucher code can be a machine-readable code. In some embodiments, the voucher code can be located on the test kit container, instructions for use, or any other component of the test kit. In some embodiments, the voucher code can be the same as the machine-readable code used to initiate the test. For example, the telehealth service may be configured to permit the test kit to be used only once. In some embodiments, rather than scanning the machine-readable code a second time or scanning a different voucher code, the system can be configured to mark the test kit as used at a defined point in the testing process, for example after the expiration date has been confirmed and the test kit has been opened and the components verified. This can enable a user who abandons a testing session to resume or restart the testing session at a later time without needing to acquire a second test kit. At step 1026, the telehealth service can explain one or more next steps to the user. In some embodiments, the telehealth service can provide text, video, images, drawings, augmented reality content, etc., to be displayed on the user device. In some embodiments, a proctor can review one or more next steps with the user.

[0114] At step 1028, the user can palpate the user’s lymph nodes while the proctor observes. At step 1030, the proctor can record the lymph node results (e.g., noting whether or not the user’s lymph nodes were swollen). At step 1032, the user can place a thermometer in the mouth of the user. At step 1034, the user can wait for a period of time (e.g., to wait for the temperature measured by the thermometer to stabilize). At step 1036, the proctor can record the temperature. Additionally or alternatively, machine learning and/or computer vision algorithms can be used to read the temperature indicated by the thermometer.

[0115] At step 1038, the telehealth service can determine if a caretaker is present. In some embodiments, the telehealth service can provide a question to the user and receive a response indicating if a caretaker is present. In some embodiments, a proctor may ask if or observe that a caretaker is present. If a caretaker is present, at step 1040, the telehealth service can explain an image capture procedure to be performed with the aid of a caretaker. If no caretaker is present, at step 1042, the telehealth service can provide an explanation of an image capture procedure to be performed without assistance from a third party.

[0116] At step 1044, the user and/or the caretaker can capture one or more images of the user’s throat. At step 1046, the user and/or the caretaker can review the captured one or more images. At step 1048, the user can confirm a quality of the captured one or more images. If the images are not of sufficient quality, the user and/or caretaker can retake the one or more images. If the images are of sufficient quality, the user device and/or the telehealth service can transmit the one or more images to the proctor for review at step 1050. At step 1052, the proctor can confirm the image quality. If the image quality is insufficient, the telehealth service can instruct the user to recapture an image. If the quality is confirmed, the proctor can wrap up the test session, and the telehealth service can provide a summary to the user at step 1054. At step 1056, the telehealth service can send data collected during the testing process to a clinician partner. The data can include, for example, information supplied by the user, observations made by the proctor, images captured by the user, and so forth. At step 1058, the telehealth service can receive a clinician rating. The clinician rating can indicate, for example, an opinion of the clinician partner regarding the quality of the captured photographs, the accuracy of the proctor’s observations, etc. At step 1060, the clinician partner can consult with the user and, if treatment is warranted in the clinician partner’s opinion, the treatment can be prescribed at step 1062. If not, treatment may not be prescribed at step 1064. In some embodiments, additional instructions can be provided, such as instructions to follow up with a provider, retest after a period of time, and so forth.

[0117] In some embodiments, a telehealth platform can be configured to enable influenza (flu) testing. In some embodiments, the telehealth platform can verify a user identity. In some embodiments, the telehealth platform can verify a temperature of the user. In some embodiments, the telehealth platform can provide a diagnosis. In some embodiments, the telehealth platform can provide a recommended treatment. In some embodiments, the telehealth platform can order the recommended treatment.

[0118] In some embodiments, a system can include a telehealth platform. The telehealth platform can include influenza (flu) testing. In some embodiments, a user can receive a thermometer, and/or an information card. In some embodiments, the information card can include a burnable voucher. The information card can include one or more computer vision features. The telehealth platform or the user can capture one or more images of the information card via a camera of a user device and analyze the one or more computer vision features via artificial intelligence, machine learning and/or a computer vision algorithm.

[0119] In some embodiments, a test kit box can contain the thermometer and/or the information card. The test kit box and/or the information card can include a machine-readable code. The machine-readable code can be a barcode, a QR code, and/or any other machine-readable code. In some embodiments, when a user scans the machine-readable code on the box and/or the information card to begin a test session, the telehealth platform can automatically direct the user to a sign-in and/or account creation page. If the user does not have an account, the telehealth platform can prompt the user to create an account. In some embodiments, the user can create a username and/or a password for the account.

[0120] In some embodiments, the telehealth platform can prompt the user to input user information. The user information can include a telehealth questionnaire, a clinical questionnaire, and/or a preferred pharmacy. If the user already has an account and the user previously input the user information, the telehealth platform can automatically skip prompting the user to input the user information.

[0121] In some embodiments, the telehealth platform can automatically analyze a camera and/or a microphone of the user device to determine if the camera and/or microphone work properly. In some embodiments, the telehealth platform can automatically analyze the camera and/or the microphone to determine if the camera and/or microphone can capture images and/or audio with a quality at or above a threshold for telehealth testing.

[0122] In some embodiments, the telehealth platform can connect the user with a proctor via a video connection. The proctor and/or the telehealth platform can prompt the user to verify the user identity. In some embodiments, the user can capture one or more images of an identification of the user and/or an image of the user and the proctor can analyze the identification and/or the image of the user. In some embodiments, the user can capture one or more images of the identification and/or the image of the user and the telehealth platform can automatically analyze the one or more images of the identification and/or the image of the user via a computer vision algorithm. In some embodiments, if the user previously interacted with the telehealth platform, the telehealth platform can have an image of the identification and/or the user stored in a user account database. The telehealth platform and/or the proctor can compare the captured one or more images of the identification and/or the image of the user to the stored image of the identification and/or the stored image of the user. If the captured one or more images of the identification and/or the image of the user match the stored image of the identification and/or the stored image of the user, the telehealth platform can confirm the identity of the user.

[0123] If the captured one or more images of the identification and/or the image of the user do not match the stored image of the identification and/or the stored image of the user, or the telehealth platform does not have an image of the identification and/or an image of the user stored in the user account database, the telehealth platform can prompt the user to establish the identity of the user. To establish the identity of the user, the user or the telehealth platform can capture an image of the user and the identification of the user, and the proctor can confirm the user and the information on the identification match. The telehealth platform can automatically store the image of the user and confirm the identity of the user.

[0124] In some embodiments, after the telehealth platform confirms the identity of the user, the telehealth platform can confirm the box is a same box the user used to start the test session and confirm the box has not expired. In some embodiments, the user can scan or capture an image of the machine-readable code on the box. The telehealth platform can automatically analyze the machine-readable code to confirm the machine-readable code is a same machine-readable code the user scanned to begin the test session. In some embodiments, the user can input the expiration into the telehealth platform. The user can input the expiration verbally via the microphone of the user device. In some embodiments, the machine-readable code can contain the expiration, and the telehealth platform can automatically analyze the machine-readable code to extract the expiration. The telehealth platform can compare the expiration to a current date to confirm the box has not expired.

[0125] In some embodiments, the telehealth platform can automatically change a camera of the user device from a front facing camera to a rear facing camera. The telehealth platform and/or the proctor can prompt the user to capture an image of the burnable voucher. The telehealth platform can automatically determine when the burnable voucher is in view of the rear facing camera and automatically capture an image of the burnable voucher. The telehealth platform can automatically analyze the image of the burnable voucher to confirm the box has not previously been used for a previous test session.

[0126] In some embodiments, the telehealth platform can automatically change the camera of the user device from the rear facing camera to the front facing camera after the image of the burnable voucher is captured. In some embodiments, the telehealth platform and/or the proctor can provide an explanation of one or more next steps to the user. In some embodiments, the telehealth platform can display text and/or graphics to the user via the display of the user device. In some embodiments, the proctor can verbally provide the explanation of the one or more next steps to the user.

[0127] In some embodiments, the telehealth platform and/or the user can capture one or more images and/or videos of a face, hand, and/or eyes of the user. In some embodiments, the telehealth platform and/or the user can capture audio of a voice of the user. In some embodiments, the telehealth platform and/or the proctor can use the one or more images and/or videos, and the audio to determine a diagnosis.

[0128] In some embodiments, the telehealth platform and/or the proctor can prompt the user to cough into the microphone. The telehealth platform can record audio of the cough and automatically analyze the audio of the cough to determine the diagnosis.

[0129] In some embodiments, after the user coughs into the microphone, the telehealth platform and/or the proctor can prompt the user to insert the thermometer into the mouth of the user. In some embodiments, the proctor can start a timer when the user inserts the thermometer into the mouth of the user, or the telehealth platform can analyze a video recording of the user inserting the thermometer into the mouth of the user to detect when the user inserts the thermometer into the mouth of the user and the telehealth platform can automatically start the timer. In some embodiments, the telehealth platform can display a timer, and the timer can count up to a predetermined time or the timer can count down from the predetermined time. In some embodiments, the predetermined time can be 1 s or about 1 s, 5 s or about 5 s, 10 s or about 10 s, 20 s or about 20 s, 30 s or about 30 s, 1 min or about 1 min, 2 min or about 2 min, 3 min or about 3 min, 4 min or about 4 min, 5 min or about 5 min, 10 min or about 10 min, 15 min or about 15 min, 20 min or about 20 min, 30 min or about 30 min, 45 min or about 45 min, 1 hour or about 1 hour, and/or any value between the aforementioned values, or more.

[0130] In some embodiments, after the user waits for the predetermined time, the telehealth platform and/or the proctor can prompt the user to capture an image of the thermometer. In some embodiments, the user can place the thermometer on the information card. The telehealth platform can automatically change the camera of the user device from the front facing camera to the rear facing camera and the user can capture an image of the thermometer on the information card. In some embodiments, the proctor can record a temperature displayed by the thermometer, or the telehealth platform can use artificial intelligence, machine learning and/or a computer vision algorithm to automatically determine the temperature displayed by the thermometer.
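For the automated temperature reading, one minimal sketch, assuming a digital thermometer with a numeric display and an OCR library such as pytesseract (real deployments would likely need display localization and per-model tuning), is:

    import re
    import cv2
    import pytesseract

    def read_thermometer_display(image_path):
        # Binarize the image to make the display digits easier to OCR.
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        text = pytesseract.image_to_string(
            binary, config="--psm 7 -c tessedit_char_whitelist=0123456789.")
        match = re.search(r"\d{2,3}\.\d", text)  # e.g., 98.6 or 101.2
        return float(match.group()) if match else None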

[0131] In some embodiments, the telehealth platform can automatically analyze the audio of the cough and/or the temperature to determine a diagnosis. In some embodiments, the proctor can discuss the diagnosis with the user and the telehealth platform can display the diagnosis, recommended treatment, and/or other test information. In some embodiments, the proctor and/or the telehealth platform can explain one or more next steps to the user.

[0132] In some embodiments, after the proctor and/or the telehealth platform explains the one or more next steps to the user, the proctor can end the video conference, or the telehealth platform can automatically end the video conference.

[0133] In some embodiments, after the proctor or the telehealth platform ends the video conference, the telehealth platform can automatically transmit the temperature, the audio of the cough, the diagnosis, the recommended treatment, and/or other test information to a medical provider or clinician partner.

[0134] In some embodiments, the medical provider or clinician partner can analyze the audio of the cough, the temperature, the diagnosis, and/or the other test information to determine a second diagnosis. In some embodiments, the medical provider or clinician partner can compare the second diagnosis and the diagnosis to determine a rating of the diagnosis. The rating can include a star rating.

[0135] In some embodiments, the medical provider or clinician partner can initiate an asynchronous telehealth consultation with the user. In some embodiments, the asynchronous telehealth consultation can include one or more messages between the medical provider or clinician partner and the user. In some embodiments, the asynchronous telehealth consultation can result in the medical provider or clinician partner confirming the recommended treatment to enable the telehealth platform to automatically order the recommended treatment for the user. In some embodiments, the recommended treatment can include a prescription that the telehealth platform can automatically transmit to a pharmacy.

[0136] In some embodiments, the telehealth consultation can result in the medical provider or clinician partner determining the user does not require the recommended treatment or the medical provider or clinician partner can recommend the user schedule a follow up appointment.

[0137] Figures 11A-11B illustrate an example process for conducting an influenza screening according to some embodiments. As with the strep screening described above, in some cases there may not be an at-home diagnostic test for influenza. Thus, in some embodiments, a user may carry out various screening activities, such as checking temperature, in order to evaluate the user.

[0138] At step 1102, the user can scan a machine-readable code on a test kit container using a camera of a user device. At step 1104, the user can sign in to or create an account with a telehealth platform. At step 1106, the user can complete a telehealth service questionnaire. At step 1108, the user can complete a clinical questionnaire, which can include questions regarding, for example, medical history, current symptoms, how long current symptoms have lasted, etc. At step 1110, the user can select a pharmacy. At step 1112, the telehealth platform can perform an automated check of the capabilities of the user device, for example to ensure that the camera and a microphone of the user device meet minimum requirements. At step 1114, the telehealth platform can provide the user with a welcome, which can be a video, images, text, a live interaction with a proctor, etc. In some embodiments, at step 1114, a video conference session can be established between a user and a proctor. At steps 1116 and 1118, the telehealth platform can verify the user’s identity. If there are issues with verifying the user’s identity, steps 1146, 1148, and 1150 can be carried out to verify the identity of the user. At step 1120, the telehealth platform can confirm the test kit container and expiration date of the test kit. At step 1122, the user can unpack the test kit and check the components of the test kit. In some embodiments, a proctor may observe the test kit components to verify that necessary components are present, not damaged, not expired, etc. At step 1124, the telehealth platform can cause the user device to select a camera (e.g., a rear facing camera) to scan a voucher code. In some embodiments, the voucher code can be the same code as the machine-readable code. The voucher code can be used to ensure that a particular test kit is only used once. At step 1126, the telehealth platform can explain one or more next steps to a user. In some embodiments, a proctor can explain next steps. In some embodiments, explanations can be provided in the form of AR overlays, written instructions, images, videos, text, etc. At step 1128, the telehealth platform can instruct the user to cough. In some embodiments, a proctor can instruct the user to cough. In some embodiments, a video, audio, text, or other instruction can be provided to the user. In some embodiments, the proctor can listen for the user’s cough. In some embodiments, the user’s cough can be analyzed automatically by the telehealth platform. For example, a machine learning model can be a classifier trained to classify cough audio into categories such as normal, respiratory infection, smoker, etc. At step 1130, the telehealth platform can instruct the user to place a thermometer in the user’s mouth. At step 1132, the user can place the thermometer in the user’s mouth. At step 1134, the user can wait for a predetermined period of time, for example a period of time sufficient for the temperature reading on the thermometer to stabilize. At step 1136, the proctor can record the temperature shown on the thermometer. In some embodiments, machine learning and/or computer vision algorithms can be configured to automatically read a temperature displayed on the thermometer. At step 1138, the proctor can wrap up the testing process, and the telehealth platform can provide a summary screen to the user. At step 1140, the telehealth platform can send data (e.g., cough data, the proctor’s notes, temperature data, etc.) to a clinician partner.
At step 1142, the clinician can rate the performance of the telehealth platform (e.g., whether or not a diagnosis was correct, how the proctor and/or computer algorithms performed, etc.). At step 1144, the clinician can provide a consultation result after reviewing the received data. In some embodiments, the clinician may have a synchronous and/or asynchronous consultation with the user, for example using a video conference, text, email, phone call, etc. The clinician can then either prescribe treatment or not prescribe treatment. In either case, the clinician may provide further instructions to the user, such as follow up care, re-testing, steps to take (e.g., rest, fluids, over the counter medications, etc.), and so forth.
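As a hedged sketch of the cough classification mentioned at step 1128, a baseline approach could summarize each recording as mean MFCC features and train a conventional classifier; the label set and model choice below are illustrative, not a description of the platform's actual model:

    import librosa
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    LABELS = ["normal", "respiratory_infection", "smoker"]  # illustrative categories

    def cough_features(path):
        # Mean MFCCs are a common, simple feature set for audio classification.
        y, sr = librosa.load(path, sr=None)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        return mfcc.mean(axis=1)

    # Training would use labeled cough recordings (X: features, y: label indices):
    # clf = RandomForestClassifier().fit(X, y)
    # prediction = LABELS[clf.predict([cough_features("cough.wav")])[0]]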

Diagnostic Test Tracking and Detection

[0139] At-home diagnostic testing and screening can offer many benefits. However, at-home diagnostic testing and screening can introduce a higher risk of error (e.g., errors by users who are unfamiliar with testing procedures), cheating (e.g., a user who wants to travel may be motivated to falsify a test result for a test for an infectious disease, or a user seeking a medication may be motivated to falsify a positive test result), and so forth.

[0140] Conventional phone and video telehealth systems have insufficient and unreliable test validation capabilities. These shortcomings can waste doctors’ and other professionals’ valuable time, lead to errors, and so forth. A telehealth platform with integrated AI/ML and/or computer vision algorithms for diagnostic test validation can increase the efficiency of the telehealth platform by automatically validating and/or generating a confidence indicator related to a user’s self-administered at-home diagnostic test. For example, by performing tasks such as tool initialization/detection, tool validation, tool tracking, tool application, and test action completion, the telehealth platform can validate an at-home diagnostic test. By automatically validating a diagnostic test, the telehealth platform can enable accurate testing to occur without requiring an excessive time commitment from medical professionals to perform diagnostic tests.

[0141] Telehealth platforms with integrated AI/ML and/or computer vision algorithms can, in some embodiments, automatically and dynamically generate a test confidence score based on the completion of a diagnostic test validation procedure. In some embodiments, the telehealth platform can reduce the time required for a medical professional to supervise and perform a diagnostic test on a user, reducing the time the medical professional spends performing a test and increasing the time the medical professional can focus on treating the user or attending to other users. For example, in some embodiments, a medical professional may conduct a more or less in-depth review of a diagnostic testing procedure based on the confidence score. For example, if a particular testing session received a low confidence score, the medical professional can review the testing procedure, test results, etc., more carefully to detect possible causes of the poor confidence score (e.g., the user did not follow steps correctly, the user attempted to manipulate the test to achieve a particular result, etc.).
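Purely as an illustration of confidence-based review routing (the thresholds and tier names here are invented for the example), such triage might look like:

    def review_depth(confidence_score):
        # Route lower-confidence sessions to progressively closer human review.
        if confidence_score < 0.5:
            return "full_review"   # step-by-step review of the recorded session
        if confidence_score < 0.8:
            return "spot_check"    # review only the flagged steps
        return "auto_accept"       # no manual review required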

[0142] In some embodiments, the systems, methods, and devices described herein may improve the efficiency and/or quality of experience associated with various telehealth services. The automation and streamlining of various testing processes may be desirable. For example, processes involved with digital diagnostics, cheat prevention (e.g., test chain of custody), and/or results validation may be improved by at least some level of automation. Being able to detect the occurrence of certain actions (e.g., user actions such as nasal swabbing, saliva collection, urine collection, etc., using a diagnostic test) using computer vision (CV) and/or AI/ML algorithms can be an important step in validating an at-home diagnostic test. In some embodiments, detecting certain actions in an at-home testing environment may be a key step in controlling the flow of automated sample collection processes and may facilitate efficient automated queuing. For example, in some embodiments, users can be prioritized based on their stage in a testing procedure, whether or not they appear to be struggling to complete steps of the testing procedure, whether they appear to be attempting to tamper with or otherwise subvert a testing procedure, and so forth.

[0143] In some embodiments, the systems, methods, and devices described herein may implement a sensor fusion approach to improve the confidence in at-home diagnostic testing. At-home diagnostic testing can include, for example, COVID-19 tests, pregnancy tests, UTI tests, STI tests, strep throat tests, flu tests, drug tests, and/or any other diagnostic test or screening. For example, in some embodiments, a telehealth platform may include a series of one or more tasks that can give confidence to an AI/ML proctor or guide (e.g., a physician or other qualified diagnostic test supervisor) that a certain user action (e.g., nasal swabbing, saliva collection, urine collection, etc.) occurred, and that the action was performed correctly. In the example of a nasal swab, a correct action may involve placing at least a portion of the swab within a nostril, maintaining the swab within a nostril for a certain amount of time, reaching a certain depth in the nostril with the swab, having the correct orientation of the swab during the test, swirling and/or moving the swab around the nostril, and/or the like. In some embodiments, the one or more tasks and their component algorithms can overlap, such that, for example, a possible weakness (e.g., low confidence) in one task can be supplemented by a strength (e.g., high confidence) in another task, which may result in a higher overall confidence in the overall diagnostic test under a wide array of different conditions. In some embodiments, each of the one or more tasks may include a subset of additional tasks. For example, the subset of tasks may vary depending on, for example, the specific application of the diagnostic test (e.g., how important the accuracy of the test is), the specific diagnostic test used, the different screening or user action requirements, and/or the like. It is recognized that while one example of a diagnostic test involving a nasal swabbing procedure is described, the systems, methods, and devices described herein may be applicable to a wide range of diagnostic testing procedures involving different actions. For example, the ability to identify a portion of a test (e.g., a testing tool such as a swab, test strip, saliva collection vessel, etc.) in an image and track the motion of at least a portion of the test relative to a user may have many applications in many different types of diagnostic testing. Further, the systems, methods, and devices described herein may have applicability in a wide range of medical uses and uses outside of the medical industry. In some cases, automatic detection, tracking, and so forth of a portion of a test can enable monitoring of a testing procedure while respecting user privacy. For example, a user taking an STI test or UTI test may be uncomfortable being watched by a live proctor when collecting a sample. In some embodiments, a sample collection process can be monitored using computer vision, machine learning, etc., such that the testing procedure can be verified without having a human being watch every step of the testing procedure.

[0144] In some embodiments, prior to performing a diagnostic test, a user may retrieve the specific diagnostic test and position themselves in front of a user device that is configured to record and/or view the testing procedure. For example, a user device may include a smartphone, tablet, laptop, desktop computer, and/or other type of personal device. In some embodiments, the telehealth platform can include a website, web application, mobile application, or any other software operating on the user device and/or another computing device. In some embodiments, the telehealth platform can provide a graphical user interface (GUI) that can be displayed on a user device. In some embodiments, the telehealth platform may periodically or continuously retrieve images, audio, and/or video of the user while the user performs the diagnostic test. In some embodiments, the telehealth platform may prompt the user to perform certain steps in the testing procedure. In some embodiments, the user may perform the steps of the testing procedure without assistance or prompts from the telehealth platform. In some embodiments, the user device can record one or more images for analysis by the telehealth platform. In some embodiments, the system may use computer vision (“CV”) and/or AI/ML to analyze the one or more images and determine whether a diagnostic test was properly performed within a confidence threshold. It is recognized that there are other embodiments of the following method which may exclude some of the tasks described and/or include additional tasks not shown/described. Additionally, the tasks discussed may be combined, separated into sub-tasks, and/or rearranged to be completed in a different order and/or in parallel.

A. Task One - Tool Detection and Validation

[0145] In some embodiments, the first task in a diagnostic test validation procedure may include one or more of: determining whether a user has a testing swab or other tool (such as a saliva collection vessel, UTI test strip, etc.); detecting a grip location on the testing swab (e.g., the position where the user’s hand may hold the testing swab); and extracting an identification from the swab (e.g., a machine-readable code, QR code, reference number, and/or the like) using one or more images of the user and/or the testing swab. In some embodiments, the grip location may be used as an input for future steps in the testing procedure. In some embodiments, identifying a testing swab may involve the system analyzing the shape, color, size, and/or the like of the testing swab. In some embodiments, the system may compare one or more images of the testing swab to a database containing images of testing swabs to identify the specific type of testing swab the user is using. In some embodiments, a user may input (e.g., via the user device) information regarding the specific diagnostic test. In some embodiments, the system uses the recognized swab as a starting location for future tracking steps.

[0146] In some embodiments, performing the tasks may include the system using one or more machine learning algorithms such as, for example, deep neural network (“DNN”) object recognition and bounding box generation and detection. For example, the system may generate (e.g., using DNN architectures) a bounding box that is configured to at least partially surround the testing swab in the one or more images. In some embodiments, the bounding box may refer to the border's coordinates that enclose an image of the testing swab. For example, the bounding box may be used to identify the testing swab and may serve as a reference point for object detection and/or create a collision box for the testing swab. In some embodiments, generating a bounding box may include the system generating a corresponding confidence interval for the bounding box. For example, the confidence interval may provide an indication of the likelihood that the bounding box contains the testing swab. For example, a low confidence interval would indicate that the system has low confidence that the bounding box contains the testing swab, and a high confidence interval would indicate that the system has a high confidence that the bounding box contains the testing swab. In some embodiments, the confidence interval may impact further tasks and/or analysis of the testing procedure. In some embodiments, a confidence interval can be impacted by factors such as lighting, contrast, and so forth. For example, there may be low confidence if lighting, contrast, or both are poor.
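The following sketch shows one way detector outputs might be filtered to select a confident swab detection; the array layout and threshold are assumptions for the example rather than a description of any particular DNN:

    import numpy as np

    def best_swab_box(boxes, scores, labels, swab_class, min_confidence=0.5):
        # boxes: (N, 4) [x1, y1, x2, y2]; scores and labels are per-detection
        # outputs of an upstream object detector.
        keep = (labels == swab_class) & (scores >= min_confidence)
        if not keep.any():
            return None, 0.0  # no confident swab detection in this frame
        idx = int(np.argmax(np.where(keep, scores, -1.0)))
        return boxes[idx], float(scores[idx])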

[0147] In some embodiments, performing the first task may include the system performing color tracking and initialization. For example, based on the determined location of the testing swab (e.g., within the generated bounding box), the system may perform a color analysis that may improve the confidence in the test swab identification (e.g., reduce the number of false test swab detections). In some embodiments, color analysis may include determining a color of a background image and/or a user’s hand and determining whether a different color (e.g., corresponding to a test swab) is present in the one or more images. In some embodiments, color analysis may be performed in all or a portion of the one or more images. For example, color analysis may only be performed within the coordinates corresponding to the bounding box.
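A minimal version of restricting the color analysis to the bounding box, assuming OpenCV and an HSV range chosen in advance for a given swab (the range and fraction threshold are illustrative), could be:

    import cv2

    def swab_color_present(frame_bgr, box, lower_hsv, upper_hsv, min_fraction=0.05):
        # Crop to the detected bounding box before analyzing color.
        x1, y1, x2, y2 = map(int, box)
        roi = cv2.cvtColor(frame_bgr[y1:y2, x1:x2], cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(roi, lower_hsv, upper_hsv)
        # True if enough of the region falls within the expected swab color range.
        return mask.mean() / 255.0 >= min_fraction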

[0148] In some embodiments, performing the first task may include the system extracting identifying information from the test swab. For example, some testing swabs may include manufactured or post-manufacturing identification methods. Identification methods can include one or more fiducials positioned on the testing swab, such as, for example, color fiducials (e.g., colored swab tips, colored swab bodies, and/or the like), visual fiducials such as stripes, machine-readable codes including QR codes and barcodes, computer vision fiducials, custom identification fiducials, and/or the like. In some embodiments, testing swabs with identification methods may provide one or more benefits such as, for example: providing the system with validation that the testing swab is accurate for the desired diagnostic test; providing the system with trackable points of interest; robustification of the color analysis (e.g., testing swabs with identification methods such as a unique color may provide for a more accurate color analysis, particularly against neutral colored backgrounds); and allowing the system to assign or receive a unique identifier corresponding to a particular user’s test swab, which may improve sample tracking and further improve validation of the diagnostic test.
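For machine-readable fiducials such as QR codes, extraction can be as simple as the following OpenCV sketch (shown for QR codes only; other fiducial types would need their own decoders):

    import cv2

    detector = cv2.QRCodeDetector()

    def read_swab_identifier(frame_bgr):
        # Returns the decoded identifier string, or None if no code is found.
        data, points, _ = detector.detectAndDecode(frame_bgr)
        return data or None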

[0149] In some embodiments, performing the first task may include the system implementing one or more feature extraction techniques such as, for example, applying a Hough transform (e.g., Hough lines). In some embodiments, prior to applying a Hough transform, the system may apply one or more pre-processing steps such as, for example, edge detection. In some embodiments, performing the first task may include the system implementing CV contour detection. The above steps (e.g., feature extraction and/or contour detection) may improve the system’s ability to determine a starting orientation for the testing swab, which may improve the system’s ability to validate a diagnostic test. Further, these steps may also provide an additional confirmation to the system that the test swab is within the one or more images and/or within the generated bounding box.
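A minimal sketch of the edge detection and Hough transform steps, with illustrative OpenCV parameter values, might be:

    import cv2
    import numpy as np

    def dominant_swab_line(frame_bgr):
        # Edge detection as pre-processing, then a probabilistic Hough transform.
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                                minLineLength=40, maxLineGap=5)
        if lines is None:
            return None
        # Treat the longest detected segment as the swab's dominant line.
        return max(lines[:, 0], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))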

[0150] In some embodiments, after completing the first task and/or associated subtasks, the system may have initialized the test swab (e.g., identified the testing swab in the one or more images and/or registered the testing swab based on the identification method) and may have registered that the test swab is spatially located within or near the user’s hand. In some embodiments, the second task in a diagnostic test validation procedure may allow the system to track the testing swab in the one or more images as it moves through the testing space.

B. Task Two - Tool Tracking

[0151] In some embodiments, the second task in the diagnostic test validation procedure may include the system tracking the user’s hand through space. For example, in conjunction with the initialized model of the testing swab registered to the user’s hand, the system may track the testing swab through space using one or more hand tracking algorithms. Tracking a user’s hand may allow the system to determine where the testing swab is relative to another portion of the user, such as the user’s nostrils as described further herein.

[0152] In some embodiments, performing the second task may include the system tracking other portions of the user’s body such as, for example, the user’s arms, face, nostrils, torso and/or the like. For example, the system may implement face tracking algorithms on the one or more images to detect different features of the user’s face (e.g., using face landmark detection) to identify different facial features such as, for example, a user’s eyes, ears, mouth, nostrils, throat, and/or the like. The different features identified may vary depending on the type of diagnostic test being used. For example, a user’s mouth may be a feature of interest for a diagnostic test including an oral testing swab, while a user’s nostril may be a feature of interest for a diagnostic test including a nasal testing swab. In some embodiments, the system may use the detected target feature’s location (e.g., nostril location) and the previous tracked points (e.g., the user’s hand and testing swab) to create a series of corresponding vectors for use in future steps.
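A trivial sketch of the vector construction, assuming 2D image coordinates for the hand and nostril are already supplied by the platform's hand-tracking and face-landmark models:

    import numpy as np

    def hand_to_nostril_vector(hand_xy, nostril_xy):
        # Direction and distance from the tracked hand point to the nostril.
        v = np.asarray(nostril_xy, dtype=float) - np.asarray(hand_xy, dtype=float)
        distance = float(np.linalg.norm(v))
        direction = v / distance if distance > 0 else v
        return direction, distance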

[0153] In some embodiments, performing the second task may include the system implementing one or more feature extraction techniques such as, for example, applying a Hough transform (e.g., Hough lines). In some embodiments, prior to applying a Hough transform, the system may apply one or more pre-processing steps such as, for example, edge detection. In some embodiments, performing the second task may include the system implementing CV contour detection. The above steps (e.g., feature extraction and/or contour detection) may allow the system to track the dominant line of the swab as the testing swab moves through space (e.g., via user movement), which may allow the system to continually estimate the testing swab’s orientation.

[0154] In some embodiments, performing the second task may include the system implementing one or more noise/motion filtering algorithms. For example, the system may use predictive modeling of a user’s hand and head motion in a filter to provide smoother tracking of the testing swab despite, for example, the system dropping any spurious frames. Use of noise/motion filtering algorithms may increase the tracking accuracy of the system. In some embodiments, the system may use a Kalman filter or another recursive algorithm for continuous filtering.
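One hedged sketch of such a filter is the minimal constant-velocity Kalman filter below, written with NumPy; the process and measurement noise values are placeholders that would need tuning:

    import numpy as np

    class ConstantVelocityKalman:
        # Minimal 2D constant-velocity Kalman filter for smoothing tracked points.
        def __init__(self, q=1e-2, r=1.0):
            self.x = np.zeros(4)                  # state: [px, py, vx, vy]
            self.P = np.eye(4)
            self.F = np.eye(4)
            self.F[0, 2] = self.F[1, 3] = 1.0     # dt = 1 frame
            self.H = np.eye(2, 4)                 # observe position only
            self.Q = q * np.eye(4)
            self.R = r * np.eye(2)

        def step(self, z):
            # Predict forward one frame, then correct with measurement z = [px, py].
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            K = self.P @ self.H.T @ np.linalg.inv(self.H @ self.P @ self.H.T + self.R)
            self.x = self.x + K @ (np.asarray(z, dtype=float) - self.H @ self.x)
            self.P = (np.eye(4) - K @ self.H) @ self.P
            return self.x[:2]                     # smoothed position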

[0155] Depending on the type of diagnostic test and testing swab used, in some embodiments, the system may increase the tracking accuracy by tracking testing swab fiducials. As described above, some testing swabs include manufactured or post-manufacturing fiducials. For example, different fiducials may provide trackable features with 2, 3, 6, and/or other degrees of freedom. When present, the system may track the fiducials to, for example, further refine the tracking steps described above, which may increase the system’s confidence in the identified location of the testing swab relative to the user (e.g., user’s hand) and the motion of the testing swab.

[0156] While described with respect to a test swab, the test tool can be any test tool, such as a test strip, saliva collection vessel, etc., and the location of the test tool can be tracked with respect to any body part of interest, such as a hand, face, nostril, mouth, genitalia, etc.

C. Task Three - Tool Application

[0157] In some embodiments, prior to completing the third task, the system will have initialized, registered, and begun tracking the testing swab through completion of one or more of the steps described above. In some embodiments, the third task in the diagnostic test validation procedure involves the system identifying intersections between the testing swab and the feature of interest (e.g., nasal cavity). In embodiments where the testing swab must be inserted into a cavity (e.g., a nostril or mouth) of the user, the system may also estimate the insertion depth of the testing swab relative to the user’s cavity.

[0158] In some embodiments, performing the third task may include the system continuing to implement user tracking algorithms (e.g., tracking the user’s hand, body, face, and/or the like). For example, if the system knows where the user’s grip is registered relative to the testing swab and the system knows the length and orientation of the testing swab, the system can calculate the vector between the user’s hand and nostril cavity (or other cavity, such as the user’s mouth). Using this information, the system can infer whether insertion of the testing swab into the cavity has occurred, and the system may estimate the depth of insertion.

[0159] In some embodiments, performing the third task may include the system performing in-line pixel counting. For example, where the system determined the Hough line/contour corresponding to the testing swab as described above, the system may determine (e.g., count) the number of pixels that form the swab in the one or more images along that line. Using the number of visible testing swab pixels in subsequent images, the system can determine whether the number of visible testing swab pixels is decreasing as the testing swab is inserted in the user’s nostril. In some embodiments, where the system has high confidence in the swab registration and tracking, the system may also determine an insertion depth of the testing swab based in part on the number of testing swab pixels present in one or more images while the testing swab is inserted in the user’s nostrils.
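A simple sketch of the pixel counting, assuming a binary mask of swab-colored pixels and the dominant line from the earlier Hough step (both produced upstream):

    import numpy as np

    def visible_swab_pixels(mask, line, samples=200):
        # mask: binary image where swab-colored pixels are 1;
        # line: (x1, y1, x2, y2) endpoints of the swab's dominant line.
        x1, y1, x2, y2 = line
        xs = np.linspace(x1, x2, samples).astype(int)
        ys = np.linspace(y1, y2, samples).astype(int)
        # A falling count across frames suggests increasing insertion depth.
        return int(mask[ys, xs].sum())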

[0160] In some embodiments, performing the third task may include the system performing color clustering for dominant color extraction. For example, the system may detect a target user feature (e.g., the user’s nostril) within the one or more images and may generate a rectangle of pixels that surrounds the target feature. In some embodiments, pixels that are within the rectangle may be white balanced and/or color corrected to a known standard. In some embodiments, each pixel within the rectangle may then be projected into three-dimensional space based on its R, G, and B coordinates, and a clustering algorithm may be applied by the system. For example, the clustering algorithm (e.g., k-means) may be used to group a portion of the image into dominant color clusters, where each color cluster is represented by the mean color of the cluster. In some embodiments, the three color clusters can indicate whether a testing swab is present around the feature of interest. For example, during the swabbing process, when the testing swab is not present, the three color clusters may lock onto the user’s dominant skin color, secondary skin color, and tertiary skin color. However, when the testing swab is present around the feature of interest (e.g., nostril), the three color clusters may lock onto the user’s dominant skin color, secondary skin color, and testing swab color. Therefore, the third cluster may be the tertiary skin color when the testing swab is not present and may be the testing swab color when the testing swab is present. By identifying the first two color clusters (e.g., dominant skin color and secondary skin color), the system may use the color of the third cluster to determine whether the swab is in the tracked feature of interest region (e.g., tracked nostril region).
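As a sketch of the clustering step, assuming scikit-learn and a white-balanced pixel rectangle as described above (the cluster count and size-based ordering are illustrative):

    import numpy as np
    from sklearn.cluster import KMeans

    def dominant_colors(patch_rgb, k=3):
        # patch_rgb: the color-corrected rectangle of pixels around the nostril.
        pixels = patch_rgb.reshape(-1, 3).astype(float)
        km = KMeans(n_clusters=k, n_init=10).fit(pixels)
        # Represent each cluster by its mean color, ordered by cluster size.
        order = np.argsort(-np.bincount(km.labels_, minlength=k))
        return km.cluster_centers_[order]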

[0161] Depending on the type of diagnostic test and testing swab used, in some embodiments, the system may increase the accuracy of the depth of insertion determination by tracking testing swab fiducials. As described above, some testing swabs include manufactured or post-manufacturing fiducials. For example, where the system is tracking one or more testing swab fiducials, the system can determine the insertion depth using, for example, image analysis to determine whether certain testing swab fiducials are present in one or more images of the user completing the swabbing portion of the diagnostic test. For example, if the testing swab includes a color fiducial (e.g., a colored tip) on the top portion (e.g., 0.25 inches, 0.5 inches, 0.75 inches, 1 inch, and/or the like) of a white colored testing swab, the system may determine that the top portion of the testing swab has been inserted in the user’s nostril once the color corresponding to the colored tip is no longer present in the one or more images. Similarly, where all or a portion of the color corresponding to the colored tip is continuously present in the one or more images, the system may determine that sufficient swabbing depth was not achieved by the user.
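
A hedged sketch of the colored-tip check: once pixels matching the tip’s HSV color range disappear from the frame, the top portion is presumed inserted. The HSV bounds and pixel threshold are hypothetical calibration values for a given swab type.

import cv2
import numpy as np

def tip_color_visible(bgr_frame, hsv_lo, hsv_hi, min_pixels=25):
    """True if the swab's colored-tip fiducial is still visible in the frame."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    return int(cv2.countNonZero(mask)) >= min_pixels  # ignore stray noise

# Sufficient depth can be inferred once the tip color drops out of view.
# sufficient_depth = not tip_color_visible(frame, (100, 120, 80), (130, 255, 255))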

D. Task Four - Test Action Completion

[0162] In some embodiments, the fourth task in the diagnostic test validation procedure involves the system determining whether sufficient user action occurred to complete the sample collection process. For example, where a user is completing a diagnostic test including a nasal swab, user actions may include achieving a certain depth with the testing swab, swirling the testing swab within the nasal cavity, keeping the testing swab within the nasal cavity for a certain amount of time, completing a swirling action for a certain amount of time, and/or the like. In the case of an oral swab, the system can determine if the user kept the swab in the user’s mouth for a predetermined period of time, swabbed the cheek or throat a predetermined number of times, etc. In the case of a UTI or STI test, the system may determine if the user captured a urine sample at a correct time (e.g., whether or not the user collected a “clean catch”).

[0163] In some embodiments, completing the fourth task may include the system continuing to track the testing swab (e.g., as described with respect to at least Task Two). For example, as described above, certain algorithms may be used to track the testing swab through space in the one or more images. In some embodiments, the system will continuously implement the tracking algorithms to determine whether sufficient motion of the testing swab relative to the nostril is detected. For example, the system may determine that motion has occurred based on the pixel motion of the visible portions of the testing swab and/or the user’s hand.
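
One simple way to realize the motion check, shown for illustration only, is frame differencing restricted to the tracker’s bounding box; the (x, y, w, h) box format and both thresholds are assumptions of this sketch.

import cv2
import numpy as np

def swab_motion(prev_gray, curr_gray, box, diff_threshold=15, min_fraction=0.02):
    """Rough motion check: fraction of changed pixels inside the swab's box.

    box -- (x, y, w, h) bounding box from the tracking step (hypothetical).
    """
    x, y, w, h = box
    diff = cv2.absdiff(prev_gray[y:y + h, x:x + w],
                       curr_gray[y:y + h, x:x + w])
    changed = np.count_nonzero(diff > diff_threshold)  # pixels that moved
    return changed / float(max(w * h, 1)) >= min_fraction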

[0164] In some embodiments, completing the fourth task may include the system determining whether the testing swab is inserted in the user’s nostril for a sufficient amount of time (e.g., the minimum amount of time associated with proper sample collection based on the specific diagnostic test). For example, using one or more of the algorithmic methods described in at least Task Three, the system can ensure that the testing swab is inserted at a proper depth in the user’s nostril for the minimum amount of time by ensuring proper depth in each subsequent image. In some embodiments, the system may determine the user has completed the sample collection process when the system determines there has been sufficient motion of the testing swab and/or insertion continuity for the required amount of time. For example, five seconds of motion at a continuous insertion level may be sufficient for some diagnostic tests.
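
The continuity requirement could be tracked with a small per-frame counter, as in this sketch; the frame rate and the five-second criterion from the example above are assumptions that would vary by diagnostic test.

class InsertionTimer:
    """Track how long proper depth and motion are held across video frames."""

    def __init__(self, fps=30.0, required_seconds=5.0):
        self.frames_needed = int(fps * required_seconds)
        self.consecutive = 0

    def update(self, depth_ok, motion_ok):
        # Reset on any frame where depth or motion is insufficient, so only
        # continuous insertion at a proper depth counts toward completion.
        self.consecutive = self.consecutive + 1 if (depth_ok and motion_ok) else 0
        return self.consecutive >= self.frames_needed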

[0165] In some embodiments, after completing all or a portion of the four tasks described herein, the system can generate a score or confidence indicator regarding the system’s confidence that the user sufficiently completed the diagnostic test. The confidence indicator can be used to completely or partially validate a user’s at-home diagnostic test. For example, the confidence score may indicate that the test was likely validly performed and therefore likely accurate, or that the test was likely not validly performed and therefore likely not accurate.
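
As an illustrative sketch, per-task results might be combined into a single confidence value with a weighted average; the task names, weights, and any pass threshold are hypothetical.

def validation_confidence(task_scores, weights=None):
    """Combine per-task scores (each in [0, 1]) into one confidence value.

    task_scores -- e.g. {"registration": 0.9, "tracking": 0.8,
                         "insertion": 0.7, "completion": 1.0}
    """
    if weights is None:
        weights = {task: 1.0 for task in task_scores}  # equal weighting
    total = sum(weights[task] for task in task_scores)
    return sum(score * weights[task] for task, score in task_scores.items()) / total

# A downstream rule might treat, say, >= 0.8 as "likely validly performed".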

[0166] Figures 12A-12B illustrate examples of test tool tracking according to some embodiments. In Figure 12A, a test swab 1200 includes fiducials 1204a-1204c. Computer vision and/or machine learning algorithms can detect the test swab 1200 and can determine a bounding box 1202 that encompasses the test swab 1200. In some embodiments, a system can detect other features in an image, such as a nostril 1206 of a user. In Figure 12B, the test swab 1200 has been inserted into the nostril 1206 of the user. The system can redetermine the bounding box 1202 to encompass an exposed portion of the test swab 1200. The system can track the movements of the test swab 1200 and/or the movements of the fiducials 1204a-1204c. The system can determine a relative position of the test swab 1200 and/or one or more of the fiducials 1204a-1204c relative to the nostril 1206, for example to determine if the test swab has been inserted into the nostril, if the test swab has been inserted to a sufficient depth into the nostril, if the user has performed one or more actions such as twisting or tilting the test swab, and so forth. In some embodiments, rotational movements can be monitored more easily if the fiducials are not constant around the circumference of the test swab (e.g., not a uniformly thick line). For example, if the line thickness of a fiducial varies around the circumference of the test swab, the variation in thickness can be used to determine a rotation amount of the test swab. In some embodiments, additionally or alternatively, a color, pattern, or other feature of a fiducial can be used to determine rotation of the test swab and/or to make it easier for a machine learning or CV algorithm to detect a movement of the test swab and/or to detect the test swab and/or the fiducial.
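
For illustration, if a fiducial band’s printed thickness varies linearly around the circumference, the measured on-screen thickness could be mapped to an approximate rotation angle as sketched below; the linear-variation assumption and the angle span are hypothetical properties of the printed fiducial.

def rotation_from_thickness(measured_px, min_px, max_px, span_degrees=180.0):
    """Map a fiducial band's measured thickness to an approximate rotation.

    Assumes thickness grows linearly from min_px to max_px over span_degrees
    of rotation; measurement noise and perspective are ignored in this sketch.
    """
    span = max(max_px - min_px, 1e-6)
    fraction = min(max(measured_px - min_px, 0.0), span) / span
    return fraction * span_degrees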

Secure and Verified Medical Information Sharing

[0167] Often, users have a need to share information between medical providers or between a medical provider and another organization. In the physical world, a receptionist or other staff at a medical office can provide a user with a consent form. The staff member can confirm information such as the user’s date of birth, name, and so forth. In some cases, the staff member may review a copy of the user’s identification.

[0168] After the user’s identity is verified and consent is provided, information can be shared with another entity such as a medical office, employer, and so forth. In the case of sharing between clinical data systems, data can be shared using standards such as HL7. In some cases, information can be shared via physical mail, email, cloud storage, fax, phone, or other means.

[0169] Traditional approaches have significant limitations. For example, verifying a user’s identity can be prone to error: because the user may only be asked to confirm basic information such as name, date of birth, or other information that can be readily ascertained, anyone with the user’s basic demographic information may be able to impersonate the individual. Identity verification issues can be particularly pronounced if identification documents are not checked. In some cases, medical providers may have sound processes in place for sharing information with other medical providers but may not have such processes in place for sharing information with other parties such as employers, government agencies, and so forth.

[0170] In some cases, when medical information is shared, personal identifying information (PII) may also be shared, such as the user’s name, date of birth, address, and so forth. This can create a significant risk if the information is intercepted by an unauthorized party, inadvertently sent to the wrong destination, and so forth. It can be important to eliminate or reduce the likelihood that medical information that includes PII is accessed by parties other than the intended recipients of the medical information.

[0171] In some implementations, PII may not be shared when sharing medical information. For example, to ensure compliance with HIPAA and/or other regulations, a system may not include PII when sharing medical information. Some conventional approaches rely on a trust model in which it is assumed that a user wishes to share accurate medical information. However, in some cases, a user may wish to share inaccurate information. For example, a traveler who tests positive for COVID-19 may want to fraudulently present a negative COVID-19 test result in order to travel, or an employee subject to drug testing may wish to fraudulently supply a negative drug test result to an employer.

[0172] In some embodiments, symmetric key encryption and/or asymmetric key encryption can be used to share information between a first party or application and a second party or application. However, such approaches can require more coordination between the first party or application and the second party or application, for example to share a symmetric key and/or to share a public key of the second party or application that the first party or application can use to encrypt information to be sent to the second party or application, the information then being decrypted using the corresponding private key of the second party or application.
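
A minimal sketch of the symmetric-key case using the Python cryptography package’s Fernet recipe; the out-of-band key exchange noted in the comment is exactly the coordination overhead described above, and the payload is hypothetical.

from cryptography.fernet import Fernet

# The shared key must be exchanged out of band between the two parties or
# applications before any medical information can be sent.
key = Fernet.generate_key()

sender = Fernet(key)
token = sender.encrypt(b'{"test_id": "abc123", "result": "negative"}')

receiver = Fernet(key)
record = receiver.decrypt(token)  # round-trips to the original payload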

[0173] In some conventional approaches to sharing medical information, a user can log in to a web portal, application, etc. of a first organization (e.g., a medical facility, employer, health pass provider, etc., to whom the user would like to provide medical information). In some cases, the user can select an option to import test results or other medical information from a second organization (e.g., from a lab, another medical provider, etc.). The user can be directed to a sign-in page for the second organization. If the user wishes to transfer their own medical information (e.g., their own test results), the user can sign in with their own login information. However, if a user wants to transfer results from someone else (e.g., from a friend who tested negative for an infectious disease, a drug test, etc.), the user can use the friend’s login information (or the friend can log in for the user). The user (or the friend) can then give permission to share medical information with the first organization. The second organization can then transfer medical information to the first organization. The medical information can be transferred to the first organization without any PII. For example, in the case of a test result, the transferred medical information can include a test identifier, a test date, a test result, etc., but may not include any information indicating who the test is associated with. Thus, for example, the friend’s test result can be transferred to the first organization as if it were the user’s own test result. Accordingly, there is a need for systems and methods that can ensure the secure, verified transfer of medical information so that, when a user elects to share information between organizations, only the user’s own information is shared, thereby preserving the integrity of medical information and reducing or eliminating fraud.

[0174] Some embodiments herein are directed to systems and methods for transmitting medical information about a user from a first system or application to a second system or application. Some embodiments herein can enable the second system or application to verify that the medical information received from the first system corresponds to the user. In some embodiments, the medical information may not include PII. In some embodiments, the medical information can include a user identifier that can be verified by the second system.

[0175] A system for generating user identifiers that do not include PII can allow the second system to verify that the medical information sent from the first system is the user’s medical information. In some embodiments, the system can include a cryptographic hash algorithm or any other user identifier generator. In some embodiments, a first system and a second system can each generate a user identifier associated with a user. The first system can transmit the user identifier generated by the first system when the first system transmits medical information to the second system. The second system can compare the user identifier received from the first system to a user identifier generated by the second system to confirm an identity of the user.
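
For illustration, a PII-free user identifier might be derived with a cryptographic hash as in the sketch below; the choice of SHA-256, the normalization rules, and the absence of a salt are simplifying assumptions rather than requirements of this disclosure.

import hashlib

def user_identifier(email):
    """Derive a PII-free user identifier from a verified email address.

    Both systems must normalize identically (here: trim and lowercase) or
    their hashes will not match; a deployment might also mix in a shared
    salt or additional verified fields such as last name.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()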

[0176] Figure 13 is a flowchart that illustrates an example process for sharing information according to some embodiments. The process illustrated in Figure 13 can be carried out on a computing system.

[0177] At step 1302, the system can prompt a user to provide permission for data sharing from a third-party source. In some embodiments, this may be implemented by, for example, directing the user to a site controlled by the third-party source, where the user can confirm directly with the third-party source that information can be shared. At step 1304, if permission for sharing was denied, the process can stop. In some embodiments, the system can provide a notification to the user that sharing permissions were denied. If permission was received, the system can receive information from the third-party source at step 1306. The received information can include a user identifier. At step 1308, the system can determine a user identifier. In some cases, a user identifier may have been previously determined, and step 1308 can include retrieving the previously determined user identifier. For example, the system may have generated a user identifier when the user signed up for an account. At step 1310, the system can compare the determined user identifier and the received user identifier from the third-party source. At step 1312, if the determined user identifier and the received user identifier do not match, the process can stop. In some embodiments, the system can provide a notification to the user that the user’s identity could not be verified. In some embodiments, the system can provide guidance for resolving the issue. For example, common reasons user identifiers may not match can include name misspellings, inconsistent naming (e.g., use of prefixes or suffixes, middle initial or middle name), inconsistent addresses (e.g., the third-party source has a different address on file than the service or provider that is receiving the information), inconsistent phone numbers, inconsistent e-mail addresses, and so forth. If, at step 1312, the received user identifier and the determined user identifier match, the system can associate the received information with the user at step 1314. After the information is associated with the user, the information can be used in further operations carried out by the system.
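
Because the mismatch causes listed above are largely formatting differences, both systems would need to normalize inputs identically before hashing. The following sketch shows one hypothetical normalization for names; a real deployment would define its own canonical rules for each field.

import re

def normalize_name(name):
    """Reduce common mismatch causes before hashing: case, suffixes, spacing."""
    name = name.strip().lower()
    name = re.sub(r"\b(jr|sr|ii|iii|iv)\.?$", "", name).strip()  # drop suffixes
    name = re.sub(r"[^a-z ]", "", name)  # drop punctuation such as periods
    return re.sub(r"\s+", " ", name)     # collapse repeated whitespace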

[0178] Figure 14 illustrates an example data sharing process according to some embodiments. A user can receive a request on display 1402 of a user device 1400 to provide permission to share data with a third party. If the user elects to share information, at step 1406, the third party can receive the data 1404 from another party. The received data 1404 can include a user identifier that can be a hash of verified user information such as an email address of the user. If the received user identifier matches an identifier generated by the third party, the user can be presented with a success display 1408. If the identifiers do not match, the user can receive an error message 1410 explaining that the third party was unable to verify the user’s identity.

[0179] Figure 15 is a flowchart that illustrates an example identity verification process according to some embodiments. At step 1502, a first system can hash a first verified identifier 1504 (e.g., an email address, phone number, social security number, etc.) of a user. At step 1506, a second system can hash a second verified identifier 1508 (e.g., an email address, phone number, social security number, etc.) of the user. At step 1510, the receiving party (which can be the first party or the second party) can compare the first hash and the second hash. In some embodiments, multiple identifiers can be hashed together or separately (e.g., last name and email address). If the hashes match, the receiving party can determine that the user of the first system is the same as the user of the second system. If the hashes do not match, the receiving party can determine that the user of the first system is different from the user of the second system.
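
The comparison step itself can be sketched in a few lines; hmac.compare_digest is used here as a constant-time comparison, a defensive choice that is an assumption of this sketch rather than part of the disclosure.

import hmac

def identifiers_match(first_hash, second_hash):
    """Compare the hashes produced by the first and second systems."""
    return hmac.compare_digest(first_hash, second_hash)

# With identical normalization (see the sketches above), differently
# formatted inputs for the same user still produce matching hashes.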

Computer Systems

[0180] Figure 16 is a block diagram depicting an embodiment of a computer hardware system configured to run software for implementing one or more embodiments disclosed herein.

[0181] In some embodiments, the systems, processes, and methods described herein are implemented using a computing system, such as the one illustrated in Figure 16. The example computer system 1602 is in communication with one or more computing systems 1620 and/or one or more data sources 1622 via one or more networks 1618. While Figure 16 illustrates an embodiment of a computing system 1602, it is recognized that the functionality provided for in the components and modules of computer system 1602 may be combined into fewer components and modules, or further separated into additional components and modules.

[0182] The computer system 1602 can comprise a module 1614 that carries out the functions, methods, acts, and/or processes described herein. The module 1614 is executed on the computer system 1602 by a central processing unit 1606 discussed further below.

[0183] In general, the word “module,” as used herein, refers to logic embodied in hardware or firmware or to a collection of software instructions, having entry and exit points. Modules are written in a programming language, such as JAVA, C or C++, Python, or the like. Software modules may be compiled or linked into an executable program, installed in a dynamic link library, or may be written in an interpreted language such as BASIC, PERL, LUA, or Python. Software modules may be called from other modules or from themselves, and/or may be invoked in response to detected events or interruptions. Modules implemented in hardware include connected logic units such as gates and flip-flops, and/or may include programmable units, such as programmable gate arrays or processors.

[0184] Generally, the modules described herein refer to logical modules that may be combined with other modules or divided into sub-modules despite their physical organization or storage. The modules are executed by one or more computing systems and may be stored on or within any suitable computer readable medium or implemented in whole or in part within specially designed hardware or firmware. Not all calculations, analyses, and/or optimizations require the use of computer systems, though any of the above-described methods, calculations, processes, or analyses may be facilitated through the use of computers. Further, in some embodiments, process blocks described herein may be altered, rearranged, combined, and/or omitted.

[0185] The computer system 1602 includes one or more processing units (CPU) 1606, which may comprise a microprocessor. The computer system 1602 further includes a physical memory 1610, such as random-access memory (RAM) for temporary storage of information, a read only memory (ROM) for permanent storage of information, and a mass storage device 1604, such as a backing store, hard drive, rotating magnetic disks, solid state disks (SSD), flash memory, phase-change memory (PCM), 3D XPoint memory, diskette, or optical media storage device. Alternatively, the mass storage device may be implemented in an array of servers. Typically, the components of the computer system 1602 are connected using a standards-based bus system. The bus system can be implemented using various protocols, such as Peripheral Component Interconnect (PCI), Micro Channel, SCSI, Industrial Standard Architecture (ISA) and Extended ISA (EISA) architectures.

[0186] The computer system 1602 includes one or more input/output (I/O) devices and interfaces 1612, such as a keyboard, mouse, touch pad, and printer. The I/O devices and interfaces 1612 can include one or more display devices, such as a monitor, which allows the visual presentation of data to a user. More particularly, a display device provides for the presentation of GUIs as application software data, and multi-media presentations, for example. The I/O devices and interfaces 1612 can also provide a communications interface to various external devices. The computer system 1602 may comprise one or more multimedia devices 1608, such as speakers, video cards, graphics accelerators, and microphones, for example.

[0187] The computer system 1602 may run on a variety of computing devices, such as a server, a Windows server, a Structured Query Language (SQL) server, a Unix server, a personal computer, a laptop computer, and so forth. In other embodiments, the computer system 1602 may run on a cluster computer system, a mainframe computer system, and/or other computing system suitable for controlling and/or communicating with large databases, performing high volume transaction processing, and generating reports from large databases. The computing system 1602 is generally controlled and coordinated by operating system software, such as Windows XP, Windows Vista, Windows 7, Windows 8, Windows 10, Windows 11, Windows Server, Unix, Linux (and its variants such as Debian, Linux Mint, Fedora, and Red Hat), SunOS, Solaris, Blackberry OS, z/OS, iOS, macOS, or other operating systems, including proprietary operating systems. Operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, and I/O services, and provide a user interface, such as a graphical user interface (GUI), among other things.

[0188] The computer system 1602 illustrated in Figure 16 is coupled to a network 1618, such as a LAN, WAN, or the Internet via a communication link 1616 (wired, wireless, or a combination thereof). Network 1618 communicates with various computing devices and/or other electronic devices, including one or more computing systems 1620 and one or more data sources 1622. The module 1614 may access or may be accessed by computing systems 1620 and/or data sources 1622 through a web-enabled user access point. Connections may be a direct physical connection, a virtual connection, or another connection type. The web-enabled user access point may comprise a browser module that uses text, graphics, audio, video, and other media to present data and to allow interaction with data via the network 1618.

[0189] Access to the module 1614 of the computer system 1602 by computing systems 1620 and/or by data sources 1622 may be through a web-enabled user access point such as the computing systems’ 1620 or data source’s 1622 personal computer, cellular phone, smartphone, laptop, tablet computer, e-reader device, audio player, or another device capable of connecting to the network 1618. Such a device may have a browser module that is implemented as a module that uses text, graphics, audio, video, and other media to present data and to allow interaction with data via the network 1618.

[0190] The output module may be implemented as a combination of an all-points addressable display such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, or other types and/or combinations of displays. The output module may be implemented to communicate with input devices 1612 and may also include software with the appropriate interfaces that allow a user to access data through the use of stylized screen elements, such as menus, windows, dialogue boxes, tool bars, and controls (for example, radio buttons, check boxes, sliding scales, and so forth). Furthermore, the output module may communicate with a set of input and output devices to receive signals from the user.

[0191] The input device(s) may comprise a keyboard, roller ball, pen and stylus, mouse, trackball, voice recognition system, or pre-designated switches or buttons. The output device(s) may comprise a speaker, a display screen, a printer, or a voice synthesizer. In addition, a touch screen may act as a hybrid input/output device. In another embodiment, a user may interact with the system more directly, such as through a system terminal connected to the score generator without communications over the Internet, a WAN, a LAN, or a similar network.

[0192] In some embodiments, the system 1602 may comprise a physical or logical connection established between a remote microprocessor and a mainframe host computer for the express purpose of uploading, downloading, or viewing interactive data and databases on-line in real time. The remote microprocessor may be operated by an entity operating the computer system 1602, including the client server systems or the main server system, and/or may be operated by one or more of the data sources 1622 and/or one or more of the computing systems 1620. In some embodiments, terminal emulation software may be used on the microprocessor for participating in the micro-mainframe link.

[0193] In some embodiments, computing systems 1620 that are internal to an entity operating the computer system 1602 may access the module 1614 internally as an application or process run by the CPU 1606.

[0194] In some embodiments, one or more features of the systems, methods, and devices described herein can utilize a URL and/or cookies, for example for storing and/or transmitting data or user information. A Uniform Resource Locator (URL) can include a web address and/or a reference to a web resource that is stored on a database and/or a server. The URL can specify the location of the resource on a computer and/or a computer network. The URL can include a mechanism to retrieve the network resource. The source of the network resource can receive a URL, identify the location of the web resource, and transmit the web resource back to the requestor. A URL can be converted to an IP address, and a Domain Name System (DNS) can look up the URL and its corresponding IP address. URLs can be references to web pages, file transfers, emails, database accesses, and other applications. The URLs can include a sequence of characters that identify a path, a domain name, a file extension, a host name, a query, a fragment, a scheme, a protocol identifier, a port number, a username, a password, a flag, an object, a resource name, and/or the like. The systems disclosed herein can generate, receive, transmit, apply, parse, serialize, render, and/or perform an action on a URL.

[0195] A cookie, also referred to as an HTTP cookie, a web cookie, an internet cookie, or a browser cookie, can include data sent from a website and/or stored on a user’s computer. This data can be stored by a user’s web browser while the user is browsing. The cookies can include useful information for websites to remember prior browsing information, such as a shopping cart on an online store, clicking of buttons, login information, and/or records of web pages or network resources visited in the past. Cookies can also include information that the user enters, such as names, addresses, passwords, credit card information, etc. Cookies can also perform computer functions. For example, authentication cookies can be used by applications (for example, a web browser) to identify whether the user is already logged in (for example, to a web site). The cookie data can be encrypted to provide security for the consumer. Tracking cookies can be used to compile historical browsing histories of individuals. Systems disclosed herein can generate and use cookies to access data of an individual. Systems can also generate and use JSON web tokens to store authenticity information, HTTP authentication as authentication protocols, IP addresses to track session or identity information, URLs, and the like.

[0196] The computing system 1602 may include one or more internal and/or external data sources (for example, data sources 1622). In some embodiments, one or more of the data repositories and the data sources described above may be implemented using a relational database, such as Sybase, Oracle, CodeBase, DB2, PostgreSQL, and Microsoft® SQL Server as well as other types of databases such as, for example, a NoSQL database (for example, Couchbase, Cassandra, or MongoDB), a flat file database, an entity-relationship database, an object-oriented database (for example, InterSystems Cache), a cloud-based database (for example, Amazon RDS, Azure SQL, Microsoft Cosmos DB, Azure Database for MySQL, Azure Database for MariaDB, Azure Cache for Redis, Azure Managed Instance for Apache Cassandra, Google Bare Metal Solution for Oracle on Google Cloud, Google Cloud SQL, Google Cloud Spanner, Google Cloud Big Table, Google Firestore, Google Firebase Realtime Database, Google Memorystore, Google MongoDB Atlas, Amazon Aurora, Amazon DynamoDB, Amazon Redshift, Amazon ElastiCache, Amazon MemoryDB for Redis, Amazon DocumentDB, Amazon Keyspaces, Amazon Neptune, Amazon Timestream, or Amazon QLDB), a non-relational database, or a record-based database.

[0197] The computer system 1602 may also access one or more databases 1622. The databases 1622 may be stored in a database or data repository. The computer system 1602 may access the one or more databases 1622 through a network 1618 or may directly access the database or data repository through I/O devices and interfaces 1612. The data repository storing the one or more databases 1622 may reside within the computer system 1602.

Additional Embodiments

[0198] In the foregoing specification, the systems and processes have been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the embodiments disclosed herein. The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense.

[0199] Indeed, although the systems and processes have been disclosed in the context of certain embodiments and examples, it will be understood by those skilled in the art that the various embodiments of the systems and processes extend beyond the specifically disclosed embodiments to other alternative embodiments and/or uses of the systems and processes and obvious modifications and equivalents thereof. In addition, while several variations of the embodiments of the systems and processes have been shown and described in detail, other modifications, which are within the scope of this disclosure, will be readily apparent to those of skill in the art based upon this disclosure. It is also contemplated that various combinations or sub-combinations of the specific features and aspects of the embodiments may be made and still fall within the scope of the disclosure. It should be understood that various features and aspects of the disclosed embodiments can be combined with, or substituted for, one another in order to form varying modes of the embodiments of the disclosed systems and processes. Any methods disclosed herein need not be performed in the order recited. Thus, it is intended that the scope of the systems and processes herein disclosed should not be limited by the particular embodiments described above.

[0200] It will be appreciated that the systems and methods of the disclosure each have several innovative aspects, no single one of which is solely responsible or required for the desirable attributes disclosed herein. The various features and processes described above may be used independently of one another or may be combined in various ways. All possible combinations and sub-combinations are intended to fall within the scope of this disclosure.

[0201] Certain features that are described in this specification in the context of separate embodiments also may be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment also may be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination may in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination. No single feature or group of features is necessary or indispensable to each and every embodiment.

[0202] It will also be appreciated that conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “for example,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. In addition, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. In addition, the articles “a,” “an,” and “the” as used in this application and the appended claims are to be construed to mean “one or more” or “at least one” unless specified otherwise. Similarly, while operations may be depicted in the drawings in a particular order, it is to be recognized that such operations need not be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Further, the drawings may schematically depict one or more example processes in the form of a flowchart. However, other operations that are not depicted may be incorporated in the example methods and processes that are schematically illustrated. For example, one or more additional operations may be performed before, after, simultaneously, or between any of the illustrated operations. Additionally, the operations may be rearranged or reordered in other embodiments. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems may generally be integrated together in a single software product or packaged into multiple software products. Additionally, other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims may be performed in a different order and still achieve desirable results.

[0203] Further, while the methods and devices described herein may be susceptible to various modifications and alternative forms, specific examples thereof have been shown in the drawings and are herein described in detail. It should be understood, however, that the embodiments are not to be limited to the particular forms or methods disclosed, but, to the contrary, the embodiments are to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the various implementations described and the appended claims. Further, the disclosure herein of any particular feature, aspect, method, property, characteristic, quality, attribute, element, or the like in connection with an implementation or embodiment can be used in all other implementations or embodiments set forth herein. Any methods disclosed herein need not be performed in the order recited. The methods disclosed herein may include certain actions taken by a practitioner; however, the methods can also include any third-party instruction of those actions, either expressly or by implication. The ranges disclosed herein also encompass any and all overlap, sub-ranges, and combinations thereof. Language such as “up to,” “at least,” “greater than,” “less than,” “between,” and the like includes the number recited. Numbers preceded by a term such as “about” or “approximately” include the recited numbers and should be interpreted based on the circumstances (for example, as accurate as reasonably possible under the circumstances, for example ±5%, ±10%, ±15%, etc.). For example, “about 3.5 mm” includes “3.5 mm.” Phrases preceded by a term such as “substantially” include the recited phrase and should be interpreted based on the circumstances (for example, as much as reasonably possible under the circumstances). For example, “substantially constant” includes “constant.” Unless stated otherwise, all measurements are at standard conditions including temperature and pressure.

[0204] As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: A, B, or C” is intended to cover: A, B, C, A and B, A and C, B and C, and A, B, and C. Conjunctive language such as the phrase “at least one of X, Y and Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to convey that an item, term, etc. may be at least one of X, Y or Z. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y, and at least one of Z to each be present. The headings provided herein, if any, are for convenience only and do not necessarily affect the scope or meaning of the devices and methods disclosed herein.

Accordingly, the claims are not intended to be limited to the embodiments shown herein but are to be accorded the widest scope consistent with this disclosure, the principles and the novel features disclosed herein.

[0205] From the foregoing, it will be appreciated that specific embodiments of the invention have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Accordingly, the invention is not limited except as by the appended claims.