

Title:
A SUPERVISED MACHINE LEARNING SYSTEM FOR OPTIMISING OUTPATIENT CLINIC ATTENDANCE
Document Type and Number:
WIPO Patent Application WO/2018/058189
Kind Code:
A1
Abstract:
There is provided herein a supervised machine learning system for optimising outpatient clinic attendance that dynamically overbooks a clinic schedule according to patient specific and clinic specific parameters. The system comprises a trained machine module. The trained machine module is configured for having as input patient specific data and clinic specific data and calculating an attendance failure probability accordingly. The system further comprises a machine learning module configured for training the trained machine module. The machine learning module trains the trained machine module using historical training data comprising patient specific training data representing a plurality of patients, clinic specific training data representing a plurality of clinics and attendance training data representing attendance by the plurality of patients for each of the historical clinics. Once the trained machine has been optimised in this way, in use, for a plurality of future clinics, the trained machine is configured for calculating an attendance failure probability (or probabilities) for the future clinics. Then, the future clinics are overbooked by a number of patients according to the calculated attendance failure probabilities to generate attendance probability optimised future clinics.

Inventors:
LAWRIE JOCK (AU)
Application Number:
PCT/AU2017/051061
Publication Date:
April 05, 2018
Filing Date:
September 28, 2017
Assignee:
HRO HOLDINGS PTY LTD (AU)
International Classes:
G06Q10/04; G06F15/18
Foreign References:
US20150242819A1 (2015-08-27)
US20160253462A1 (2016-09-01)
Other References:
ALAEDDINI ET AL.: "A probabilistic model for predicting the probability of no-show in hospital appointments", HEALTH CARE MANAGEMENT SCIENCE, vol. 14, no. 2, 1 February 2011 (2011-02-01), pages 146 - 157, XP019896569
ALAEDDINI ET AL.: "A hybrid prediction model for noshows and cancellations of outpatient appointments", IIE TRANSACTIONS ON HEALTHCARE SYSTEMS ENGINEERING, vol. 5, no. 1, 16 March 2015 (2015-03-16), pages 14 - 32, XP055494955
HUANG ET AL.: "Using Artificial Neural Networks to Establish a Customer-cancellation Prediction Model", PRZEGLAD ELEKTROTECHNICZNY, vol. 89, no. 1b, 2013, pages 178 - 180, XP055494959
SAMORANI ET AL.: "Outpatient appointment scheduling given individual day-dependent no-show predictions", EUROPEAN JOURNAL OF OPERATIONAL RESEARCH, vol. 240, no. 1, 2015, pages 245 - 257, XP055268703
Attorney, Agent or Firm:
PATENTEC PATENT ATTORNEYS (AU)
Claims:
Claims

1. A supervised machine learning system for optimising outpatient clinic attendance, the system comprising:

a trained machine module configured for having as input patient specific data and clinic specific data for a plurality of clinics and for calculating an attendance failure probability according to the input patient specific data and clinic specific data;

a machine learning module configured for training the trained machine module, wherein the machine learning module trains the trained machine module using historical training data comprising patient specific training data representing a plurality of patients; clinic specific training data representing a plurality of clinics and attendance training data representing attendance by the plurality of patients of the respective historical clinics, and wherein the machine learning module is configured for optimising the accuracy of the attendance failure probability calculation of the trained machine; and

a clinic schedule module for scheduling clinics, wherein, in use:

for a future clinic schedule comprising patient specific data and clinic specific data, the trained machine is configured for calculating an attendance failure probability for the future clinic schedule;

the clinic schedule module is configured for overbooking the future clinic schedule by a number of patients calculated in accordance with the attendance failure probability to generate an attendance probability optimised clinic schedule.

2. A system as claimed in claim 1, wherein the system is further configured for identifying a patient for overbooking for the attendance probability optimised clinic schedule.

3. A system as claimed in claim 2, wherein identifying the patient comprises identifying the patient in accordance with wait time data for the patient.

4. A system as claimed in claim 2, wherein identifying the patient comprises identifying the patient in accordance with an attendance failure probability for the patient determined by the trained machine module.

5. A system as claimed in claim 4, wherein the patient is identified according to a difference of the attendance failure probability for the patient and the attendance failure probability for the future clinic schedule.

6. A system as claimed in claim 1, wherein the machine learning module is configured for calculating the attendance failure probability for a particular time period of the future clinic schedule.

7. A system as claimed in claim 1, wherein the trained machine module comprises an artificial neural network.

8. A system as claimed in claim 7, wherein the artificial neural network comprises input nodes for patient and clinic specific data and at least one output node for the attendance failure probability calculation.

9. A system as claimed in claim 8, wherein the neural network comprises at least one hidden layer between the input nodes and the output node.

10. A system as claimed in claim 7, wherein the machine learning module is configured for adjusting weightings of the artificial neural network.

11. A system as claimed in claim 7, wherein the machine learning module is configured for adjusting the architecture of the artificial neural network.

12. A system as claimed in claim 11, wherein adjusting architecture comprises adjusting at least one of the number of neurons and number of hidden layers.

13. A system as claimed in claim 1, wherein the patient specific data comprises patient demographic data.

14. A system as claimed in claim 13, wherein the patient demographic data comprises at least one of age, gender and residential address.

15. A system as claimed in claim 13, wherein the patient specific data further comprises health- related data.

16. A system as claimed in claim 15, wherein the health-related data comprises at least one of smoking status, pregnancy status, diabetes status and diagnosis.

17. A system as claimed in claim 1, wherein the clinic specific data comprises at least one of clinical speciality data, health practitioner specific data, and date time data.

18. A system as claimed in claim 1, wherein the calculated attendance failure probability is patient specific.

19. A system as claimed in claim 18, wherein the machine learning module is configured for calculating an attendance failure probability for each patient of the future clinic schedule.

20. A system as claimed in claim 1, wherein the calculated attendance failure probability is clinic schedule specific.

21. A system as claimed in claim 20, wherein the machine learning module is configured for calculating an attendance failure probability for the entire clinic schedule.

22. A system as claimed in claim 21, wherein the attendance failure probability represents an attendance failure probability distribution.

23. A system as claimed in claim 22, wherein calculating an attendance failure probability for the entire clinic schedule comprises calculating an attendance failure probability for each scheduled patient of the future clinic schedule and combining the attendance failure probabilities to calculate the attendance failure probability distribution.

24. A system as claimed in claim 23, wherein the attendance failure probability for the entire clinic schedule is calculated using the number of patients scheduled and the attendance failure probability distribution.

25. A system as claimed in claim 23, wherein the attendance probability optimised clinic schedule is fed back as input into the machine learning module.

Description:
A supervised machine learning system for optimising outpatient clinic attendance

Field of the Invention

[1] The present invention relates to a supervised machine learning system for optimising outpatient clinic attendance.

Background

[2] Outpatient clinic failure to attend (FTA) rates are problematic, with an estimated 10 to 20% of patients failing to attend scheduled outpatient clinics. Such high FTA rates not only waste professional time and resources but also increase patient wait times.

[3] Prior art solutions to reduce FTA rates, such as automated clinic confirmation messaging systems and the like, have failed to address such high FTA rates.

[4] The present invention seeks to provide a way of optimising outpatient clinic attendance which will overcome or substantially ameliorate at least some of the deficiencies of the prior art, or to at least provide an alternative.

[5] It is to be understood that, if any prior art information is referred to herein, such reference does not constitute an admission that the information forms part of the common general knowledge in the art, in Australia or any other country.

Summary of the Disclosure

[6] We found that it is difficult to set a constant overbooking rate for a clinic schedule to account for FTA rates because FTA rates vary over time.

[7] In this regard, we specifically discovered that FTA rates may be surprisingly affected by patient specific parameters and clinic specific parameters in unintuitive ways that dynamically change over time.

[8] As such, we developed the present supervised machine learning system that dynamically overbooks a clinic schedule according to patient specific and clinic specific parameters to optimise outpatient clinic attendance.

[9] The system comprises a trained machine module. The trained machine module is configured for having as input patient specific data and clinic specific data and calculating an attendance failure probability accordingly.

[10] The system further comprises a machine learning module configured for training the trained machine module.

[11] The machine learning module trains the trained machine module using historical training data comprising patient specific training data representing a plurality of patients, clinic specific training data representing a plurality of clinics and attendance training data representing attendance by the plurality of patients for each of the historical clinics.

[12] The machine learning module is configured for optimising the accuracy of the attendance failure probability calculation of the trained machine.

[13] Once the trained machine has been optimised in this way, in use, for a plurality of future clinics, the trained machine is configured for calculating an attendance failure probability (or probabilities) for the future clinics.

[14] Then, the future clinics are overbooked by a number of patients according to the calculated attendance failure probabilities to generate attendance probability optimised future clinics.

[15] As such, the supervised machine learning system described herein is configured for intelligently overbooking outpatient clinic schedules according to future FTA rates predicted by the system to generate attendance probability optimised future booking schedules.

[16] The supervised machine learning module may identify seemingly unrelated and unintuitive effects on FTA rates. For example, the supervised machine learning system may have detected that patients with diabetes who smoke generally do not attend physiotherapy clinics on Mondays if they live more than 10 km from the hospital. As such, for a clinic scheduled for next Monday, a number of the outpatients may meet such criteria and therefore the FTA rate calculated by the supervised machine learning system may be higher than normal, such as 12%.

[17] As such, the calculated FTA rate may be utilised for calculating a number of overbookings to make for next Monday's clinic schedule.

[18] In embodiments, the number of overbookings may be configured using an FTA percentage reduction setting. For example, the FTA percentage reduction setting may be set to 50% so as to aim to reduce the FTA rate by half. As such, for the calculated FTA rate of 12% for next Monday, the overbooking number for a clinic comprising 100 patients would be 6.
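
The arithmetic of paragraph [18] can be illustrated with a minimal sketch. The function name and signature below are illustrative only and are not part of the disclosed system; the sketch simply reproduces the worked example of a 12% FTA rate, a 50% reduction setting and a 100-patient clinic.

```python
def overbooking_count(scheduled_patients: int,
                      predicted_fta_rate: float,
                      fta_reduction_setting: float) -> int:
    """Number of extra bookings needed to offset the predicted FTA rate.

    Hypothetical helper: scheduled_patients is the clinic size, predicted_fta_rate
    is the attendance failure probability calculated by the trained machine, and
    fta_reduction_setting is the configured fraction of the FTA rate to recover.
    """
    return round(scheduled_patients * predicted_fta_rate * fta_reduction_setting)

# Worked example from paragraph [18]: 100 patients, 12% FTA, 50% reduction -> 6
assert overbooking_count(100, 0.12, 0.50) == 6
```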

[19] As such, with the foregoing in mind, in accordance with one embodiment, there is provided a supervised machine learning system for optimising outpatient clinic attendance, the system comprising: a trained machine module configured for: having as input patient specific data and clinic specific data for a plurality of clinics; and calculating an attendance failure probability according to the input patient specific data and clinic specific data; a machine learning module configured for training the trained machine module, wherein the machine learning module trains the trained machine module using historical training data comprising patient specific training data representing a plurality of patients; clinic specific training data representing a plurality of clinics and attendance training data representing attendance by the plurality of patients of the respective historical clinics, and wherein the machine learning module is configured for optimising the accuracy of the attendance failure probability calculation of the trained machine; and a clinic schedule module for scheduling clinics, wherein, in use: for a future clinic schedule comprising patient specific data and clinic specific data, the trained machine is configured for calculating an attendance failure probability for the future clinic schedule; the clinic schedule module is configured for overbooking the future clinic schedule by a number of patients calculated in accordance with the attendance failure probability to generate an attendance probability optimised clinic schedule.

[20] The system may be further configured for identifying a patient for overbooking for the attendance probability optimised clinic schedule.

[21] Identifying the patient may comprise identifying the patient in accordance with wait time data for the patient.

[22] Identifying the patient may comprise identifying the patient in accordance with an expected failure to attend rate for the patient determined by the trained machine module.

[23] The patient may be identified according to a difference of the attendance failure probability for the patient and the attendance failure probability for the future clinic schedule.

[24] The machine learning module may be configured for calculating the attendance failure probability for a particular time period of the future clinic schedule.

[25] The trained machine module may comprise an artificial neural network.

[26] The artificial neural network may comprise input nodes for patient specific data and clinic specific data and at least one output node for the attendance failure probability calculation.

[27] The neural network may comprise at least one hidden layer between the input nodes and the output node.

[28] The machine learning module may be configured for adjusting weightings of the artificial neural network.

[29] The machine learning module may be configured for adjusting the architecture of the artificial neural network.

[30] Adjusting the architecture may comprise adjusting at least one of the number of neurons and number of hidden layers.

[31] The patient specific data may comprise patient demographic data.

[32] The patient demographic data may comprise at least one of age, gender and residential address.

[33] The patient specific data further may comprise health-related data.

[34] The health-related data may comprise at least one of smoking status, pregnancy status, diabetes status and diagnosis.

[35] The clinic specific data may comprise at least one of clinical speciality data, health practitioner specific data, and date time data.

[36] The calculated attendance failure probability may be patient specific.

[37] The machine learning module may be configured for calculating an attendance failure probability for each patient of the future clinic schedule.

[38] The calculated attendance failure probability may be clinic schedule specific.

[39] The machine learning module may be configured for calculating an attendance failure probability for the entire clinic schedule.

[40] The attendance failure probability may represent an attendance failure probability distribution.

[41] Calculating an attendance failure probability for the entire clinic schedule may comprise calculating an attendance failure probability for each scheduled patient of the future clinic schedule and combining the attendance failure probabilities to calculate the attendance failure probability distribution.

[42] The attendance failure probability for the entire clinic schedule may be calculated using the number of patients scheduled and the attendance failure probability distribution.

[43] Other aspects of the invention are also disclosed.

Brief Description of the Drawings

[44] Notwithstanding any other forms which may fall within the scope of the present invention, preferred embodiments of the disclosure will now be described, by way of example only, with reference to the accompanying drawings in which:

[45] Figure 1 shows a supervised machine learning system for optimising outpatient clinic attendance in accordance with an embodiment of the present disclosure; and

[46] Figure 2 shows an exemplary data flow for supervised machine learning system for optimising outpatient clinic attendance in accordance with an embodiment of the present disclosure.

Description of Embodiments

[47] For the purposes of promoting an understanding of the principles in accordance with the disclosure, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Any alterations and further modifications of the inventive features illustrated herein, and any additional applications of the principles of the disclosure as illustrated herein, which would normally occur to one skilled in the relevant art and having possession of this disclosure, are to be considered within the scope of the disclosure.

[48] Before the structures, systems and associated methods relating to the machine learning system for optimising outpatient clinic attendance are disclosed and described, it is to be understood that this disclosure is not limited to the particular configurations, process steps, and materials disclosed herein as such may vary somewhat. It is also to be understood that the terminology employed herein is used for the purpose of describing particular embodiments only and is not intended to be limiting since the scope of the disclosure will be limited only by the claims and equivalents thereof.

[49] In describing and claiming the subject matter of the disclosure, the following terminology will be used in accordance with the definitions set out below.

[50] It must be noted that, as used in this specification and the appended claims, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise.

[51] As used herein, the terms "comprising," "including," "containing," "characterised by," and grammatical equivalents thereof are inclusive or open-ended terms that do not exclude additional, unrecited elements or method steps.

[52] It should be noted in the following description that like or the same reference numerals in different embodiments denote the same or similar features.

[53] Figure 1 shows a supervised machine learning system 100 for optimising outpatient clinic attendance.

[54] In the specific embodiment shown in figure 1, the system 100 takes the form of a distributed web-server architecture and therefore comprises a web server 101 in operable communication with a plurality of client terminals 102 across the Internet 124. It should be noted that not all embodiments need necessarily be limited to this distributed web-server architecture and, in embodiments, the processing functionality described herein may be implemented by way of a standalone computing device, for example.

[55] Each of the server 101 and client terminals 102 comprises a processor 109 for processing digital data. In operable communication with the processor 109 across a system bus 108 is a memory device 114.

[56] The memory device 114 is configured for storing digital data including computer program code instructions. As such, in use, the processor 109 is configured for fetching these computer code instructions from the memory 114 for interpretation and execution and wherein data results from such execution may be stored within memory 114.

[57] The memory device 114 of the server 101 has been shown as having been divided into logical computer program code instruction modules. These instruction modules may comprise an operating system 107. The operating system 107 may be fetched by the processor 109 during the bootstrap phase.

[58] The memory device 114 may further comprise a plurality of applications including a web server application 110 such as the Apache Web server application. The applications may further comprise a hypertext preprocessor 106 and a database server application 111. As such, the web server application 110, upon receiving a web request, is able to dynamically generate webpage responses utilising the hypertext preprocessor 106 and the database server application 111.

[59] For the specific outpatient clinic attendance optimisation computer processes as described herein, there is shown the memory device 114 of the server 101 comprising a plurality of software modules 103 - 105 and respective database tables 115 - 118.

[60] As is shown, the software modules may comprise a trained machine module 104. As will be described in further detail below, the trained machine module 104 has as input patient specific data and clinic specific data 116 and is configured for calculating attendance failure probabilities 117 accordingly.

[61] The modules may further comprise a machine learning module 103 configured for training the trained machine module 104. Specifically, the machine learning module 103 trains the trained machine module 104 utilising historical training data comprising patient specific training data representing a plurality of patients, clinic specific training data representing a plurality of clinics and attendance training data representing attendance by the plurality of patients of the respective historical clinics.

[62] In this way, the machine learning module 103 is configured for optimising accuracy of the attendance failure probability calculation 117 of the trained machine module 104.

[63] The modules may further comprise a scheduler module 105 for scheduling the clinics.

[64] Relatedly, the client terminal 102 may have a scheduler module 120 and store patient and clinic data 121 and attendance data 122 for the respective clinic schedules.

[65] Each of the server 101 and the client terminal 102 may comprise a network interface 113 for sending and receiving data across the Internet 124.

[66] Furthermore, each of the server 101 and client terminal 102 may comprise an I/O interface 112 for interfacing with various computer peripherals including human interface and data storage peripherals.

[67] Furthermore, the client terminal 102 may comprise a display device 123 for the display of digital data including the clinic schedules described herein.

[68] Having described the system architecture above, reference is now made to figure 2 illustrating the supervised machine learning system data flow 200 for optimising outpatient clinic attendance.

[69] As is shown, the data flow 200 comprises supervised machine learning 215 comprising the machine learning module 103 and the trained machine module 104.

[70] As alluded to above, the machine learning module 103 is configured for training the trained machine module 104.

[71] Specifically, the machine learning module 103 trains the trained machine module 104 using historical training data 201 which may be obtained via the database interface 205.

[72] The historical training data 201 comprises patient specific training data 202 representing a plurality of patients, and clinic specific training data 203 representing a plurality of clinics.

[73] Furthermore, the historical training data 201 comprises attendance training data 204 representing attendance by the plurality of patients of the respective historical clinics.

[74] As can be seen, the trained machine module 104 is configured for outputting an attendance failure probability 213.

[75] As such, the machine learning module 103 is configured for optimising the accuracy of the attendance failure probability calculation 213 of the trained machine module 104 with reference to the attendance training data 204.

[76] In one embodiment, the trained machine module 104 may take the form of an artificial neural network (ANN). As such, during training, the machine learning module 103 may generate trained data 206 comprising a plurality of weightings 208 for weighting each of the neural paths of the artificial neural network.

[77] Over and above generating weightings for the neural network, the trained data 206 may further comprise architectural modification data 207 to modify and optimise the neural network, such as by modifying the number of neurons, number of layers et cetera.

[78] As such, having trained the trained machine module 104 in this way, in use, the trained machine module 104 is configured for receiving a query comprising a future clinics schedule 209 for a plurality of future clinics.

[79] The future clinics schedule data 209 may be obtained from the scheduler module 105. As alluded to above, in embodiments, the server 101 may be sent, or periodically retrieve, the future clinics schedule 209 from the respective client terminals 102 across the Internet 124.

[80] The future clinics schedule data 209 may comprise patient specific data 211 and, in a preferred embodiment, clinic specific data 212.

[81] As such, having such input, the trained machine module 104 is configured for calculating an attendance failure probability 213 for the future clinics schedule 209.

[82] The attendance failure probability 213 may take the form of an attendance failure probability 213 percentage or probability distribution.

[83] Having calculated the attendance failure probability 213 for the future clinics schedule 209, a number of patient overbookings 217 is calculated according to the attendance failure probability 213.

[84] The number of overbookings 217 may be modified according to attendance settings 216 as will be described in further detail below.

[85] Having calculated the number of patient overbookings 217, the scheduler module 105 may be configured with the number of patient overbookings 217 to generate an attendance probability optimised clinic schedule 219.

[86] As alluded to above, the optimised schedule 219 is intelligently optimised in accordance with the patient specific data 211 and clinic specific data 212 to dynamically mitigate FTA rates.

[87] Over time, the optimised schedule 219 may be utilised as feedback 220 for further training the machine learning module 103.

[88] For example, using the present system 100, a clinic group having a capacity of 50 clinics may have 60 patients booked (overbooked by 10 patients) but only 49 patients actually attend, so the group is under-attended by 1 (which is a good result). Without the overbooking, the actual attendance may have been around 40, an under-attendance of 10 (or 20%), which is a bad result.
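
The effect described in paragraph [88] can be checked with a short Monte Carlo sketch. This is an illustration only, assuming each booked patient independently fails to attend with a probability of roughly 20%; the function and variable names are not taken from the specification.

```python
import random

def expected_attendees(booked: int, fta_rate: float, trials: int = 10_000) -> float:
    """Average number of attendees when each booked patient independently
    fails to attend with probability fta_rate (illustrative assumption only)."""
    total = 0
    for _ in range(trials):
        total += sum(1 for _ in range(booked) if random.random() >= fta_rate)
    return total / trials

random.seed(0)
print(expected_attendees(booked=50, fta_rate=0.2))  # ~40 attendees: under-attended by ~10
print(expected_attendees(booked=60, fta_rate=0.2))  # ~48 attendees: close to the capacity of 50
```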

[89] Now, having generally described the dataflow 200 above, there will now be described specific embodiments primarily for illustrative purposes. It should be noted that the specific embodiments are exemplary only and that no technical limitation should be necessarily imputed to all the embodiments accordingly.

[90] Now, in accordance with the specific illustrative embodiment, the client terminal 102 may be operated by a hospital. The hospital is staffed by many health practitioners, each having an associated clinic with an associated clinic schedule. In this specific illustrative embodiment, the hospital comprises five doctors each having between 5 and 10 clinics available per clinic schedule.

[91] The clinic scheduling may be maintained by the scheduler 120 of the client terminal 102. Furthermore, the client terminal 102 may record the attendance data 122 which may be subsequently utilised for training in the manner described herein.

[92] As such, so as to optimise the outpatient clinic attendance for the hospital, the client terminal is configured for sending the historical attendance data 204 across the Internet 124 to the server 101 for optimisation purposes.

[93] In this illustrative example, the historical training data 201 comprises at least one of patient specific data and clinic specific data and an attendance status indication as to whether an associated previous clinic was attended by the patient.

[94] The patient specific data may comprise such data as patient demographic data, such as age, gender, residential address and the like.

[95] The patient specific data may further comprise health related data, such as smoking status, pregnancy status, diabetes status, diagnosis and the like.

[96] In further embodiments as will be described in further detail below, the patient specific data may include the time the patient has spent on the waiting list for a clinic. The waiting list time patient specific data may be utilised for prioritising patients when selecting patients for overbooking.

[97] Further patient specific information may relate to the clinic type such as whether the clinic is a first clinic or a review (i.e. checkup) clinic.

[98] The patient specific data may further comprise other relevant data for optimising the schedule.

[99] The clinic specific data may comprise various data including the clinical speciality, such as neurology, cardiology and the like. The clinic specific data may further comprise health practitioner specific data.

[100] Furthermore, the attendance status indication, such as in Boolean format, indicates whether the historical clinic was attended or not. Date and time specific information for the historical clinic may be recorded also, such as the day of the week, month, time of day and the like.

[101] For example, the historical training data 201 may indicate that a 42-year-old male smoker with diabetes and living at a residential address 15 km from the hospital had failed to attend a checkup clinic at 2 PM on Monday, 26 September 2017 with cardiologist Dr John Smith.
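
By way of illustration only, a single historical training record such as the one described in paragraph [101] might be represented as follows. The field names are assumptions made for this sketch and are not prescribed by the specification.

```python
# Hypothetical representation of one historical training record; the field names
# are illustrative only and are not taken from the specification.
training_record = {
    # patient specific training data
    "age": 42,
    "gender": "male",
    "smoker": True,
    "diabetes": True,
    "distance_from_hospital_km": 15,
    "clinic_type": "review",            # first clinic or review (i.e. checkup) clinic
    # clinic specific training data
    "speciality": "cardiology",
    "practitioner": "Dr John Smith",
    "day_of_week": "Monday",
    "time_of_day": "14:00",
    # attendance training data (the supervised label)
    "attended": False,
}
```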

[102] As alluded to above, the supervised machine learning 215 may comprise the trained machine module 104 being trained by the machine learning module 103 using the historical training data 201.

[103] For the embodiment wherein the trained machine module 104 comprises an artificial neural network, the artificial neural network may comprise input nodes for the patient specific data 211 and clinic specific data 212, a number of hidden layers and a predicted attendance failure probability 213 output node.

[104] Now, during the training phase, the weighting of the artificial neural network may be trained. Specifically, as the historical training data 201 is fed into the machine learning module 103, the machine learning module 103 adjusts the weights of the artificial neural network so as to reduce the output error of the calculated predicted attendance failure probability 213 when compared to the input attendance training data 204.

[105] In embodiments, the architecture of the artificial neural network may be fixed. However, during the training phase, the machine learning module 103 may additionally optimise the architecture of the artificial neural network.
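
A minimal training sketch is given below, assuming the patient specific and clinic specific data have already been encoded numerically. scikit-learn's MLPClassifier stands in for the artificial neural network, and the small grid over hidden_layer_sizes stands in for the architecture optimisation of paragraph [105]; none of these library choices are mandated by the specification, and the placeholder data is random.

```python
# A minimal sketch, assuming encoded patient + clinic features in X and a binary
# failure-to-attend label in y. MLPClassifier stands in for the artificial neural
# network; the grid over hidden_layer_sizes is a simple stand-in for adjusting
# the architecture (number of neurons and hidden layers).
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 8))             # placeholder for encoded patient and clinic specific data
y = rng.integers(0, 2, size=500)     # placeholder attendance training labels (1 = failed to attend)

search = GridSearchCV(
    MLPClassifier(max_iter=1000, random_state=0),
    param_grid={"hidden_layer_sizes": [(8,), (16,), (16, 8)]},
    scoring="neg_log_loss",
    cv=3,
)
search.fit(X, y)
trained_machine = search.best_estimator_

# Predicted attendance failure probability for each patient of a future clinic schedule
future_clinic = rng.random((5, 8))
fta_probabilities = trained_machine.predict_proba(future_clinic)[:, 1]
print(fta_probabilities)
```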

[106] Now, once having been trained, the trained artificial neural network is utilised to calculate the expected/predicted attendance failure probability 213.

[107] The output expected attendance failure probability 213 may either be specific to a particular clinic or to a group of clinics. For the former, the output attendance failure probability 213 may represent, for example, that for Dr John Smith, the expected attendance failure probability 213 for a particular clinic day/period would be 10% for the specific patient or clinic.

[108] For the latter, the expected attendance failure probabilities 213 could be for a group of clinics such as a group of clinics comprising five doctors wherein, for the group of clinics, there would be a combined attendance failure probability 213 of 10% for a particular day/clinic period. Specifically, in embodiments, the machine learning module 103 may calculate a probability of non-attendance for each scheduled patient. Then, these probabilities are combined to obtain the probability of non-attendances for each clinic, that is, the probability of 0 non-attendances, the probability of 1 non-attendance, the probability of 2 non-attendances, and so on up to the probability of K non-attendances, where K is the number of patients booked into the clinic.

[109] Similarly, clinics can be grouped wherein the same approach applies to obtain attendance failure probabilities for the group, that is, the probability of 0 non-attendances through to the probability of K non-attendances, where K is the total number of patients booked into the group. As a clinic group has 1 or more clinics, the distinction between a clinic and a group of clinics is immaterial, because the former is a special case of the latter. Combining the patient-level probabilities into clinic or group-level probabilities may use the Poisson-Binomial distribution, calculated via fast Fourier transforms.
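
As a sketch of the combination step described in paragraph [109], the Poisson-Binomial distribution of the number of non-attendances can be computed from the patient-level probabilities with a discrete Fourier transform. The function below is illustrative only and the example probabilities are made up.

```python
import numpy as np

def poisson_binomial_pmf(p):
    """Probability of 0, 1, ..., K non-attendances for one clinic (or clinic group),
    given each booked patient's attendance failure probability p[j].
    Uses the discrete-Fourier-transform form of the Poisson-Binomial PMF."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    omega = np.exp(2j * np.pi / (n + 1))
    # Characteristic function evaluated at the (n + 1) complex roots of unity
    chi = np.array([np.prod(1.0 - p + p * omega**l) for l in range(n + 1)])
    pmf = np.real(np.fft.fft(chi)) / (n + 1)
    return np.clip(pmf, 0.0, 1.0)

# e.g. five booked patients with differing predicted attendance failure probabilities
pmf = poisson_binomial_pmf([0.05, 0.10, 0.12, 0.20, 0.30])
for k, prob in enumerate(pmf):
    print(f"P({k} non-attendances) = {prob:.3f}")
```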

[110] For example, the trained artificial neural network may have detected that patients with diabetes who smoke generally do not attend their physiotherapy clinics on Mondays if they live more than 10 km from the hospital. As such, for a clinic scheduled for next Monday, a number of the outpatients may meet such criteria and therefore the attendance failure probability 213 output by the artificial neural network would be higher than normal, such as 12%.

[111] In embodiments, the attendance failure probability 213 may be generated in the format of an attendance failure probability distribution.

[112] Now, the output attendance failure probability 213 may be utilised for calculating a number of overbookings 217 to make. The number of overbookings 217 may be determined in accordance with the attendance settings 216 which may include the above-described failure to attend percentage reduction setting or the over attendance rate. For example, the failure to attend percentage reduction setting may be configured at 50% so as to aim to reduce the FTA rate by half. As such, for the calculated FTA rate of 12% for next Monday, the overbooking number 217 for a clinic comprising 100 patients would be 6.

[113] As such, the scheduler module 105 would then update the outpatient schedule 118. In embodiments, the server 101 would update the schedule of the client terminal 102 remotely such as by having access to the schedule. In alternative embodiments, the server 101 may send the number of overbookings 217 to the client terminal 102 such that the client terminal 102 is able to update the schedule itself.

[114] In embodiments, the system 100 may implement a "dummy schedule" representing a schedule comprising outpatients who may attend any of the available health practitioners in a given clinic group when available. For example, the additional six outpatients may be allocated to the dummy schedule such that, at any time, should an outpatient fail to attend a particular clinic, any of the outpatients of the dummy schedule may be allocated to the relevant slot.
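
The "dummy schedule" of paragraph [114] may be pictured as a standby queue for the clinic group, as in the following sketch. The data structures and names are illustrative assumptions only.

```python
from collections import deque

# Illustrative only: the "dummy schedule" as a standby queue of overbooked outpatients.
# When a booked outpatient fails to attend, the next standby outpatient fills the slot.
dummy_schedule = deque(["patient_A", "patient_B", "patient_C",
                        "patient_D", "patient_E", "patient_F"])

def fill_slot(slot, standby):
    """Allocate a standby outpatient to a slot freed by a failure to attend."""
    if standby:
        return {"slot": slot, "allocated_to": standby.popleft()}
    return {"slot": slot, "allocated_to": None}   # no standby outpatients remaining

print(fill_slot("Dr John Smith 14:00", dummy_schedule))
```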

[115] In embodiments, the clinics may run for predetermined time periods, such as between 9 AM and 12 PM. As such, in this embodiment, the system 100 need only allocate the overbookings to the dummy schedule corresponding to this time period wherein those outpatients allocated to the dummy schedule may be required to wait for an available timeslot during this time period. In other words, in this embodiment, the attendance failure probability 213 may be calculated for each daily clinic schedule.

[116] However, in further embodiments, the attendance failure probabilities 213 may have greater time period granularity so as to aim to reduce the waiting period for outpatients on the dummy schedule. For example, the artificial neural network may have further calculated that the above exemplary diabetic outpatient group is more likely to fail to attend clinics after lunch on Mondays. As such, for the additional outpatients to be scheduled, the scheduler module 105 would request that the overbooked outpatients attend the clinic at the relevant time after lunch such as around 2 PM.

[117] In further embodiments, as opposed to calculating only a number of outpatients to overbook, the system 100 may calculate patient specific overbooking data. In this embodiment, when overbooking, the system 100 is configured for selecting specific outpatients for overbooking. In one embodiment, such selection may be in accordance with waiting times wherein those outpatients who have been on the waiting list for longer are favoured.
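
Waiting-list based selection of the overbooked outpatients, as described in paragraph [117], might look like the following sketch; the record layout and names are assumptions for illustration only.

```python
# Illustrative only: choose which waiting-list outpatients to overbook, favouring
# those who have spent longest on the waiting list.
waiting_list = [
    {"patient_id": 1, "days_waiting": 120},
    {"patient_id": 2, "days_waiting": 45},
    {"patient_id": 3, "days_waiting": 200},
    {"patient_id": 4, "days_waiting": 90},
]

def select_for_overbooking(waiting_list, overbooking_count):
    """Return the outpatients to overbook, longest-waiting first."""
    ranked = sorted(waiting_list, key=lambda p: p["days_waiting"], reverse=True)
    return ranked[:overbooking_count]

print(select_for_overbooking(waiting_list, overbooking_count=2))   # patient_ids 3 and 1
```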

[118] In further embodiments, the system 100 may select outpatients similarly in accordance with attendance failure probabilities 213. For example, when overbooking the schedule to account for the above-described diabetics who fail to attend clinics on Mondays, the system 100 may favour outpatients having differing patient specific data so as to, for example, not select further diabetic outpatients who may themselves similarly fail to attend the allocated clinics.

Interpretation

Wireless:

[119] The invention may be embodied using devices conforming to other network standards and for other applications, including, for example other WLAN standards and other wireless standards. Applications that can be accommodated include IEEE 802.11 wireless LANs and links, and wireless Ethernet.

[120] In the context of this document, the term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. In the context of this document, the term "wired" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a solid medium. The term does not imply that the associated devices are coupled by electrically conductive wires.

Processes:

[121] Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as "processing", "computing", "calculating", "determining", "analysing" or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities into other data similarly represented as physical quantities.

Processor:

[122] In a similar manner, the term "processor" may refer to any device or portion of a device that processes electronic data, e.g., from registers and/or memory to transform that electronic data into other electronic data that, e.g., may be stored in registers and/or memory. A "computer" or a "computing device" or a "computing machine" or a "computing platform" may include one or more processors.

[123] The methodologies described herein are, in one embodiment, performable by one or more processors that accept computer-readable (also called machine-readable) code containing a set of instructions that when executed by one or more of the processors carry out at least one of the methods described herein. Any processor capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken is included. Thus, one example is a typical processing system that includes one or more processors. The processing system further may include a memory subsystem including main RAM and/or a static RAM, and/or ROM.

Computer-Readable Medium:

[124] Furthermore, a computer-readable carrier medium may form, or be included in a computer program product. A computer program product can be stored on a computer usable carrier medium, the computer program product comprising a computer readable program means for causing a processor to perform a method as described herein.

Networked or Multiple Processors:

[125] In alternative embodiments, the one or more processors operate as a standalone device or may be connected, e.g., networked to other processor(s), in a networked deployment; the one or more processors may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer or distributed network environment. The one or more processors may form a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.

[126] Note that while some diagram(s) only show(s) a single processor and a single memory that carries the computer-readable code, those in the art will understand that many of the components described above are included, but not explicitly shown or described in order not to obscure the inventive aspect. For example, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

Additional Embodiments:

[127] Thus, one embodiment of each of the methods described herein is in the form of a computer- readable carrier medium carrying a set of instructions, e.g., a computer program that are for execution on one or more processors. Thus, as will be appreciated by those skilled in the art, embodiments of the present invention may be embodied as a method, an apparatus such as a special purpose apparatus, an apparatus such as a data processing system, or a computer-readable carrier medium. The computer-readable carrier medium carries computer readable code including a set of instructions that when executed on one or more processors cause a processor or processors to implement a method. Accordingly, aspects of the present invention may take the form of a method, an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of carrier medium (e.g., a computer program product on a computer-readable storage medium) carrying computer-readable program code embodied in the medium.

Carrier Medium:

[128] The software may further be transmitted or received over a network via a network interface device. While the carrier medium is shown in an example embodiment to be a single medium, the term "carrier medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "carrier medium" shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by one or more of the processors and that cause the one or more processors to perform any one or more of the methodologies of the present invention. A carrier medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media.

Implementation:

[129] It will be understood that the steps of methods discussed are performed in one embodiment by an appropriate processor (or processors) of a processing (i.e., computer) system executing instructions (computer-readable code) stored in storage. It will also be understood that the invention is not limited to any particular implementation or programming technique and that the invention may be implemented using any appropriate techniques for implementing the functionality described herein. The invention is not limited to any particular programming language or operating system.

Means For Carrying out a Method or Function

[130] Furthermore, some of the embodiments are described herein as a method or combination of elements of a method that can be implemented by a processor of a processor device, computer system, or by other means of carrying out the function. Thus, a processor with the necessary instructions for carrying out such a method or element of a method forms a means for carrying out the method or element of a method. Furthermore, an element described herein of an apparatus embodiment is an example of a means for carrying out the function performed by the element for the purpose of carrying out the invention.

Connected

[131] Similarly, it is to be noticed that the term connected, when used in the claims, should not be interpreted as being limitative to direct connections only. Thus, the scope of the expression a device A connected to a device B should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means. "Connected" may mean that two or more elements are either in direct physical or electrical contact, or that two or more elements are not in direct contact with each other but yet still co-operate or interact with each other.

Embodiments:

[132] Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment, but may. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner, as would be apparent to one of ordinary skill in the art from this disclosure, in one or more embodiments.

[133] Similarly it should be appreciated that in the above description of example embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the Detailed Description of Specific Embodiments are hereby expressly incorporated into this Detailed Description of Specific Embodiments, with each claim standing on its own as a separate embodiment of this invention.

[134] Furthermore, while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention, and form different embodiments, as would be understood by those in the art. For example, in the following claims, any of the claimed embodiments can be used in any combination.

Different Instances of Objects

[135] As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.

Specific Details

[136] In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.

Terminology

[137] In describing the preferred embodiment of the invention illustrated in the drawings, specific terminology will be resorted to for the sake of clarity. However, the invention is not intended to be limited to the specific terms so selected, and it is to be understood that each specific term includes all technical equivalents which operate in a similar manner to accomplish a similar technical purpose. Terms such as "forward", "rearward", "radially", "peripherally", "upwardly", "downwardly", and the like are used as words of convenience to provide reference points and are not to be construed as limiting terms.

Comprising and Including

[138] In the claims which follow and in the preceding description of the invention, except where the context requires otherwise due to express language or necessary implication, the word "comprise" or variations such as "comprises" or "comprising" are used in an inclusive sense, i.e. to specify the presence of the stated features but not to preclude the presence or addition of further features in various embodiments of the invention.

[139] Any one of the terms: including or which includes or that includes as used herein is also an open term that also means including at least the elements/features that follow the term, but not excluding others. Thus, including is synonymous with and means comprising.

Scope of Invention

[140] Thus, while there has been described what are believed to be the preferred embodiments of the invention, those skilled in the art will recognize that other and further modifications may be made thereto without departing from the spirit of the invention, and it is intended to claim all such changes and modifications as fall within the scope of the invention. For example, any formulas given above are merely representative of procedures that may be used. Functionality may be added or deleted from the block diagrams and operations may be interchanged among functional blocks. Steps may be added or deleted to methods described within the scope of the present invention.

[141] Although the invention has been described with reference to specific examples, it will be appreciated by those skilled in the art that the invention may be embodied in many other forms.