ARSLAN TUGHRUL SATI (GB)
ALSEHLY FIRAS (GB)
SEVAK ZANKAR UPENDRAKUMAR (GB)
WO2009021068A1 | 2009-02-12
US20110282620A1 | 2011-11-17
US20080077326A1 | 2008-03-27
DE102008054739A1 | 2010-06-17
US20100125414A1 | 2010-05-20
US20090143199A1 | 2009-06-04
Claims

1. A method of estimating the position of a user device carried by a user, the method comprising: providing an initial position of the user device; measuring a vertical acceleration of the user device, thereby generating vertical acceleration data; measuring an orientation of the user device, thereby generating principal direction data; processing the vertical acceleration data to detect one or more steps taken by the user, and subsequently validating the detection of one or more of said one or more steps; generating one or more motion vectors in respect of the validated steps taken by the user using the principal direction data; and estimating an updated position of the user device by combining the motion vector(s) with the initial position.

2. A method of estimating the position of a user device carried by a user according to claim 1 wherein processing the vertical acceleration data comprises identifying a plurality of local maxima and local minima of vertical acceleration.

3. A method of estimating the position of a user device carried by a user according to claim 2 wherein processing the vertical acceleration data further comprises comparing each local maximum with a chronologically adjacent local minimum to detect whether a step has been taken by the user.

4. A method of estimating the position of a user device carried by a user according to claim 3 wherein comparing each local maximum with a chronologically adjacent local minimum comprises comparing a magnitude of each local maximum with a magnitude of a chronologically subsequent local minimum to determine a magnitude difference value and comparing the magnitude difference value with a magnitude threshold value.

5. A method of estimating the position of a user device carried by a user according to claim 3 or 4 wherein comparing each local maximum with a chronologically adjacent local minimum comprises comparing a chronological index associated with each local maximum with a chronological index associated with a or the chronologically subsequent local minimum to determine a chronological difference value and comparing the chronological difference value with a chronological reference value.

6. A method of estimating the position of a user device carried by a user according to claim 4 or claim 5 wherein, for each local maximum and local minimum from which a step has been detected, validating the detection of said one or more steps comprises comparing the magnitude of each local minimum with the magnitude of a chronologically subsequent local maximum to determine a validation magnitude difference value and comparing the validation magnitude difference value to a validation magnitude threshold value.

7. A method of estimating the position of a user device carried by a user according to any one of claims 4 to 6 wherein, for each local maximum and local minimum from which a step has been detected, validating the detection of said one or more steps comprises comparing a chronological index associated with each local minimum with a chronological index associated with a chronologically subsequent local maximum to determine a validation chronological difference value and comparing the validation chronological difference value to a validation chronological difference threshold value.

8. A method of estimating the position of a user device carried by a user according to any one preceding claim wherein a plurality of steps are detected, and wherein validating the detection of the plurality of steps comprises: determining a normalised chronological index interval between chronologically adjacent steps; comparing chronological index intervals between chronologically adjacent steps with the normalised chronological index interval; and validating the detection of each step if a difference between the normalised chronological index interval and the chronological index interval between that step and a preceding step is within a predetermined range of the normalised chronological index interval and/or the chronological index interval between that step and a subsequent step is within a predetermined range of the normalised chronological index interval.

9. A method of estimating the position of a user device carried by a user according to any one preceding claim wherein processing the vertical acceleration data comprises low pass filtering the vertical acceleration data.

10. A method of estimating the position of a user device carried by a user according to any one preceding claim further comprising updating a vertical position of the user device.

11. A method of estimating the position of a user device carried by a user according to claim 10 further comprising updating the vertical position of the user device using the vertical acceleration data and the principal direction data, or data derived therefrom.

12. A method of estimating the position of a user device carried by a user according to any one preceding claim further comprising: generating a plurality of motion vectors; and generating floor change data by comparing two or more of the plurality of motion vectors, or data derived from said two or more of the plurality of motion vectors, to reference data to determine whether the user device has changed floor within a building.

13. A method of estimating the position of a user device carried by a user according to claim 12 wherein the reference data comprises one or more candidate reference patterns, each candidate reference pattern comprising data representing a plurality of motion vectors arranged in a specific order relating to the relative movements required to ascend or descend a particular staircase within the building.

14. A method of estimating the position of a user device carried by a user according to claim 13 wherein the step of generating floor change data comprises: comparing said two or more motion vectors to one or more candidate reference patterns comprised in the reference data; determining that said two or more motion vectors conform to one of the one or more candidate reference patterns; and retrieving floor change data associated with said one of the one or more candidate reference patterns from a database of floor change data.

15. A method of estimating the position of a user device carried by a user according to any one of claims 12 to 14 further comprising processing the vertical acceleration data to determine one or more vertical movement indicators and processing the one or more vertical movement indicators to indicate whether the user device has moved up or down floors within the building.

16. A method of estimating the position of a user device carried by a user according to claim 15 further comprising: enabling an algorithm for comparing said two or more motion vectors to reference data to determine whether the user device has changed floor within a building in response to one or more vertical movement indicators indicating that the user device has moved up or down floors.

17. A method of estimating the position of a user device carried by a user according to any one of claims 12 to 16 as dependent on claim 10 or claim 11 comprising using the floor change data to update the vertical position of the user device.

18. A method of estimating the position of a user device carried by a user according to any one preceding claim further comprising processing the vertical acceleration data to determine an average stride length of the user and using the average stride length to generate the motion vector.

19. A method of estimating the position of a user device carried by a user according to any one preceding claim further comprising updating an estimated position of the user device using an alternative positioning method when said alternative positioning method is available and meets one or more accuracy criteria.

20. A method of estimating the position of a user device carried by a user according to claim 19 further comprising comparing a position of the user device estimated by the alternative positioning method to the updated position of the user device to determine one or more accuracy measurements of the updated position, and adjusting the magnitude and/or chronological threshold values and/or an or the estimated average stride length of the user if the accuracy measurement(s) fail(s) one or more second accuracy criteria.

21. A method of estimating the position of a user device carried by a user according to any one preceding claim further comprising receiving one or more signals from an electromagnetic signal source, processing the one or more received signals to estimate a position of the electromagnetic signal source using the updated position and creating, updating or correcting a database of electromagnetic signal source positions using the estimated position of the electromagnetic signal source.

22. A method of estimating the position of a user device carried by a user according to any one preceding claim further comprising: determining that the user device has changed floors within a building; and generating a reference pattern representative of the floor change performed by the user device by identifying and storing two or more motion vectors generated in respect of validated steps taken by the user during said floor change.

23. A method of estimating the position of a user device carried by a user according to claim 22 wherein the step of determining that the user device has changed floors within a building comprises: receiving one or more electromagnetic signals identifying one or more electromagnetic signal sources or a vertical position from which the electromagnetic signal(s) were transmitted; and processing the received electromagnetic signals to determine the vertical position of one or more electromagnetic signal sources.

24. A user device comprising a vertical acceleration measurement module operable to measure a vertical acceleration of the user device; an orientation measurement module operable to measure an orientation of the user device; and a controller in electronic communication with the vertical acceleration and orientation measurement modules, the controller being operable to: process vertical acceleration data generated by the vertical acceleration module to detect one or more steps taken by the user, and to subsequently validate the detection of one or more of said one or more steps; generate one or more motion vectors in respect of the validated steps taken by the user using principal direction data generated by the orientation measurement module; and estimate an updated position of the user device by combining the motion vector(s) with an initial position of the user device.

25. A user device according to claim 24 further comprising an alternative positioning system module in electronic communication with the controller for estimating a position of the user device.

26. A user device according to claim 25 wherein the alternative positioning system module comprises a satellite positioning system receiver.

27. A user device according to claim 25 or claim 26 wherein the alternative positioning system module comprises an electromagnetic signal receiver.

28. A method of estimating the position of a user device carried by a user, the method comprising: measuring an acceleration of the user device, thereby generating acceleration data; measuring a direction of movement of the user device, thereby generating direction data; processing the acceleration data to detect two or more steps taken by the user; generating two or more motion vectors relating to two or more of the detected steps taken by the user taking into account the direction data; and generating floor change data by comparing the said two or more motion vectors, or data derived from the said two or more motion vectors, with reference data to determine whether the user device has changed floors within a building.

29. A method of estimating the position of a user device carried by a user according to claim 28 wherein the reference data comprises one or more candidate reference patterns, each candidate reference pattern comprising data representing a plurality of motion vectors arranged in a specific order relating to the relative movements required to change floors within the building.

30. A method of estimating the position of a user device carried by a user according to claim 29 wherein the step of generating floor change data comprises: comparing said two or more generated motion vectors to one or more candidate reference patterns comprised in the reference data; determining that said two or more generated motion vectors conform to one of the one or more candidate reference patterns; and obtaining floor change data associated with said one of the one or more candidate reference patterns from a database of floor change data.

31. A method of estimating the position of a user device carried by a user according to any one of claims 28 to 30 further comprising processing the acceleration data to determine one or more vertical movement indicators and processing the one or more vertical movement indicators to indicate whether the user device has moved up or down floors within the building.

32. A method of estimating the position of a user device carried by a user according to claim 31 further comprising: enabling an algorithm for comparing said two or more generated motion vectors to reference data to determine whether the user device has changed floor within a building in response to one or more vertical movement indicators indicating that the user device has moved up or down floors.

33. A method of estimating the position of a user device carried by a user according to any one of claims 28 to 32 further comprising using the generated floor change data to update an estimated vertical position of the user device.

34. A non-transitory computer readable medium retrievably storing computer readable code for causing a computer to perform the steps of the method according to any one of claims 28 to 33.

35. A user device comprising an acceleration measurement module configured to measure an acceleration of the user device; a direction measurement module configured to measure a direction of movement of the user device, thereby generating direction data; and a controller in electronic communication with the acceleration and direction measurement modules, the controller being configured to: process acceleration data generated by the acceleration module to detect two or more steps taken by the user; generate two or more motion vectors in respect of two or more of the detected steps taken by the user using the direction data generated by the direction measurement module; and to generate floor change data by comparing the said two or more motion vectors, or data derived from the said two or more motion vectors, with reference data to determine whether the user device has changed floors within a building.

36. The user device of claim 35 wherein the reference data comprises one or more candidate reference patterns, each candidate reference pattern comprising data representing a plurality of motion vectors arranged in a specific order relating to the relative movements required to change floors within the building.

37. The user device according to claim 36 wherein the controller is configured to generate floor change data by: comparing said two or more generated motion vectors to one or more candidate reference patterns comprised in the reference data; determining that said two or more generated motion vectors conform to one of the one or more candidate reference patterns; and obtaining floor change data associated with said one of the one or more candidate reference patterns from a database of floor change data.

38. The user device according to any one of claims 35 to 37 wherein the controller is configured to process the acceleration data to determine one or more vertical movement indicators and to process the one or more vertical movement indicators to indicate whether the user device has moved up or down floors within the building.

39. The user device according to claim 38 wherein the controller is configured to: enable an algorithm for comparing said two or more generated motion vectors to reference data to determine whether the user device has changed floor within a building in response to one or more vertical movement indicators indicating that the user device has moved up or down floors.

40. A user device according to any one of claims 35 to 39 wherein the controller is configured to use the generated floor change data to update an estimated vertical position of the user device.
A_min is the average (typically mean) magnitude of the lowest 10% by magnitude of the vertical acceleration measurements;
A_avg is the average (typically mean) magnitude of all of the vertical acceleration measurements; and
C is a fixed coefficient which may be determined empirically. Typically, a value of 0.25 is suitable, and provides the most accurate results for an actual stride length of between 500 mm and 600 mm. This formula provides an effective trade-off between accuracy and efficient power consumption by the controller 2. At a next step 68, preliminary step detection is performed. In order to detect one or more steps taken by the user carrying the user device 1, firstly, as described above, the controller 2 low pass filters the vertical acceleration data to remove fluctuations in vertical acceleration data not caused by steps taken by the user. Next, the controller 2 identifies the local maxima and local minima of the filtered vertical acceleration data and stores their magnitudes and chronological indices in memory 12. For the purposes of the following discussion it will be assumed that the filtered acceleration data is identical to the example shown in Figure 2B and thus has eight local maxima 20-34 and eight local minima 36-50. In order to determine whether a step may have been taken by the user, the controller 2 compares each local maximum with a chronologically adjacent and subsequent local minimum. Firstly, the magnitude of each local maximum is compared with the magnitude of the chronologically adjacent and subsequent local minimum to determine a magnitude difference value. For example, with reference to Figure 2B, the magnitude of local maximum 20 is approximately 17 ms⁻² while the magnitude of the chronologically adjacent and subsequent local minimum 36 is approximately 4.5 ms⁻². Accordingly, the magnitude difference value is 12.5 ms⁻². Secondly, the chronological index (in the example shown in Figures 2A and 2B, the chronological index is a relative time stamp) of each local maximum is compared with the chronological index of the chronologically subsequent local minimum to determine a chronological difference value.
In the example referred to above the chronological index associated with the local maximum 20 is approximately 2.1 seconds while the chronological index associated with local minimum 36 is approximately 2.4 seconds. Accordingly, the chronological difference value between the local maximum 20 and local minimum 36 is approximately 300 ms. This chronological difference value is stored in memory 12. Next, a relationship table, such as the following, may be used to estimate whether or not a step has been taken by the user:

D > 4.5 ms⁻² and T < 500 ms: step detected
3 ms⁻² ≤ D ≤ 4.5 ms⁻² and 97 ms ≤ T ≤ 500 ms: step detected
otherwise: no step detected
where D is the magnitude difference value and T is the chronological difference value. Thus, when D is greater than a magnitude threshold value, in this case 4.5 ms⁻², and T is less than a chronological reference value, in this case 500 ms, a step is detected from the local maximum and the local minimum. If either of these conditions is not met, the second test described on the second line of the relationship table is performed. That is, if D is between first and second magnitude threshold values 3 ms⁻² and 4.5 ms⁻², and T is between first and second chronological reference values 97 ms and 500 ms, a step is detected from the local maximum and the local minimum. If neither of these conditions is met, no step is detected from that local maximum and local minimum combination. It will be understood that other (more or less) complex tests may be performed in the preliminary step detection process 68. It will be understood that, in some embodiments, only the magnitudes, or only the chronological indices, of chronologically adjacent local maxima and minima may be compared to determine whether a step may have been taken by the user. However, preferably both the magnitudes and chronological indices are compared. The chronological index associated with each detected step is the chronological index of the local minimum associated with that step. Therefore the step associated with local maximum 20 and local minimum 36 is considered to have been taken at a chronological index of approximately 2.4 seconds. In the next step 70, a step validation process is performed. Firstly, the magnitude of the local minimum associated with each detected step is compared with the magnitude of the chronologically subsequent and adjacent local maximum to determine a validation magnitude difference value.
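The two-line relationship test described above can be sketched in code as follows. This is a minimal illustration only, not the patented implementation; the function name and data layout are our assumptions, with the threshold values taken from the example in the text:

```python
def detect_steps(maxima, minima):
    """Preliminary step detection from chronologically adjacent extrema.

    maxima, minima: lists of (time_s, magnitude_ms2) pairs, where
    minima[i] is the local minimum immediately following maxima[i].
    Returns the chronological indices (times of the local minima)
    of the detected steps.
    """
    steps = []
    for (t_max, a_max), (t_min, a_min) in zip(maxima, minima):
        d = a_max - a_min   # magnitude difference value D
        t = t_min - t_max   # chronological difference value T
        # First test: large magnitude drop within 500 ms.
        if d > 4.5 and t < 0.5:
            steps.append(t_min)   # step indexed at the local minimum
        # Second test: moderate drop within a bounded time window.
        elif 3.0 <= d <= 4.5 and 0.097 <= t <= 0.5:
            steps.append(t_min)
    return steps
```

Applied to the example figures in the text (local maximum 20 at 2.1 s and 17 ms⁻², local minimum 36 at 2.4 s and 4.5 ms⁻²), the first test fires and a step is detected at a chronological index of 2.4 s.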
Additionally or alternatively, the chronological index of the local minimum associated with each detected step is compared with the chronological index of the chronologically subsequent and adjacent local maximum to determine a validation chronological difference value. If the validation magnitude difference value is greater than a validation magnitude threshold value, and the validation chronological difference value is less than the validation chronological difference threshold value, the step associated with that local minimum is validated. Conversely, if the validation magnitude difference value is less than the validation magnitude threshold value, and/or the validation chronological difference value is greater than the validation chronological difference threshold value, the step associated with that local minimum is invalidated. In an exemplary embodiment, the validation magnitude threshold value may be 2.5 ms⁻², while the validation chronological difference threshold value may be 900 ms. For the step associated with local maximum 20 and minimum 36, the validation magnitude difference value is approximately 16.5 ms⁻² while the validation chronological difference value is approximately 400 ms. The validation magnitude difference value is thus greater than the validation magnitude threshold value and the validation chronological difference value is less than the validation chronological difference threshold value. Accordingly, the step associated with local maximum 20 and minimum 36 is validated. By comparing the local minimum from which a step has been detected to a chronologically subsequent local maximum which was not used to detect that step, further context is taken into account regarding the movements of the user which helps to validate or invalidate the detection of the one or more steps. As an additional or alternative validation measure, a search for anomalies of step interval may be performed.
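The validation check just described can be sketched as follows. Again this is an illustrative sketch rather than the patented implementation; the function name is our own and the defaults use the exemplary thresholds from the text:

```python
def validate_step(min_mag, min_time, next_max_mag, next_max_time,
                  mag_threshold=2.5, time_threshold=0.9):
    """Validate a detected step against the chronologically subsequent
    and adjacent local maximum.

    The step is validated when the rise from its local minimum to the
    next local maximum exceeds the validation magnitude threshold
    (2.5 ms^-2 in the example) within the validation chronological
    difference threshold (900 ms in the example).
    """
    d_val = next_max_mag - min_mag     # validation magnitude difference value
    t_val = next_max_time - min_time   # validation chronological difference value
    return d_val > mag_threshold and t_val < time_threshold
```

For the worked example (minimum 36 at 4.5 ms⁻² and 2.4 s, followed 400 ms later by a maximum roughly 16.5 ms⁻² higher), the step validates.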
In this case, chronological index intervals between chronologically adjacent detected steps may be determined. In practice, this typically involves calculating the difference in chronological index between the local minima associated with adjacent detected steps. In addition, a normalised chronological index interval may be determined by averaging all of the determined chronological index intervals. For example, the normalised chronological index interval may be the mean of all the determined chronological index intervals. Then, for each detected step, the controller 2 may determine whether a difference between the normalised chronological index interval and the chronological index interval between that step and a preceding detected step is within a predetermined range of the normalised chronological index interval. Additionally or alternatively, it may be determined whether the difference between the normalised chronological index interval and the chronological index interval between that step and a subsequent detected step is within a predetermined range of the normalised chronological index interval. If both of these intervals are within the predetermined ranges of the normalised chronological index interval, the detected step is validated. If either or both of these intervals are not within the respective predetermined ranges of the normalised chronological index interval, this step is invalidated. By comparing the chronological index interval between chronologically adjacent steps with a normalised chronological index interval, outlier intervals can readily be identified and the steps associated with the outlier intervals can thus be invalidated before any motion vectors are generated (see below). This helps to improve the accuracy of the estimation of the updated position of the user device 1. In the next step 72, the controller 2 determines the orientation of the user device during each validated detected step. 
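The interval-anomaly search described above can be sketched as follows. The function name and the tolerance value are our assumptions (the patent leaves the predetermined range unspecified), and, as in the text, a step here must satisfy the range check against both its preceding and subsequent interval where those exist:

```python
def validate_intervals(step_times, tolerance=0.25):
    """Invalidate detected steps whose inter-step intervals are outliers.

    The normalised chronological index interval is the mean of all
    adjacent-step intervals; a step survives only if every interval it
    participates in lies within +/- tolerance * normalised interval.
    """
    intervals = [b - a for a, b in zip(step_times, step_times[1:])]
    if not intervals:
        return list(step_times)   # zero or one step: nothing to compare
    norm = sum(intervals) / len(intervals)   # normalised chronological index interval
    valid = []
    for i, t in enumerate(step_times):
        before = intervals[i - 1] if i > 0 else None
        after = intervals[i] if i < len(intervals) else None
        ok = all(abs(iv - norm) <= tolerance * norm
                 for iv in (before, after) if iv is not None)
        if ok:
            valid.append(t)
    return valid
```

A regular cadence (e.g. steps every 0.5 s) passes unchanged, while a step separated from its neighbours by a markedly longer or shorter interval is dropped before any motion vectors are generated.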
In order to determine the orientation of the user device during a given step, directional measurements made by the compass 8 between the local maximum and local minimum associated with that step may be averaged (e.g. by taking the mean direction of the compass readings). If the compass reading of a validated detected step varies from the orientation of a preceding validated detected step by more than a predetermined threshold, measurements made by the gyroscope 6 between the two steps may be used to validate or invalidate the compass reading. For example, this may be done by integrating the gyroscope measurements taken between the local maximum and minimum associated with that step on a horizontal plane. If the compass reading is validated by the gyroscope measurements, the average compass direction will be validated as the direction of the step. If the compass reading is invalidated by the gyroscope measurements, the direction of the step will be taken to be the orientation of the user device during the preceding validated detected step. Following detection and validation of the steps taken by the user, determination of the average stride length of the user, and the determination of the orientation of the user device from the compass/gyroscope measurements, a walk-path motion vector is generated in step 74 for each validated detected step. Each step is considered to have covered a distance equal to the average stride length in the direction derived from the orientation of the user device as described above. A plurality of the walk-path motion vectors may subsequently be combined if, for example, they indicate movement in the same direction. Once generated, the motion vectors are stored in a vector buffer of memory 12 for further processing (see below). Typically, the walk-path motion vector is a 2D motion vector which describes movement of the user in two dimensions (typically a horizontal plane).
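Generating a 2D walk-path motion vector from the average stride length and the step direction, and combining such vectors with an initial position, can be sketched as follows. The function names and the heading convention (degrees clockwise from north, with north along the y-axis) are our assumptions:

```python
import math

def walk_path_vector(stride_length_m, heading_deg):
    """2D walk-path motion vector for one validated detected step:
    the average stride length in the step's direction."""
    rad = math.radians(heading_deg)
    return (stride_length_m * math.sin(rad),   # east component
            stride_length_m * math.cos(rad))   # north component

def update_position(initial, vectors):
    """Estimate an updated 2D position by combining the motion
    vectors with the initial position."""
    x, y = initial
    for dx, dy in vectors:
        x += dx
        y += dy
    return (x, y)
```

For instance, one 0.55 m stride heading due north moves the estimate 0.55 m along the y-axis.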
However, one or more vertical motion indicators may also be derived from the vertical acceleration data. For example, it has been empirically determined that, if all of the following three conditions are met, the user has moved vertically upwards:
where:
A_ave_max is the average magnitude of the local maxima detected in step 68; and
A_ave_min is the average magnitude of the local minima detected in step 68.
In addition, it has been empirically determined that, if both of the following two conditions are met, the user has moved vertically downwards: A_ave_min and A_ave_max may be averaged over the time period to determine a vertical motion indicator over that time period. In this case, the vertical motion indicators indicate whether vertically up or vertically down movement has been performed by the user over the time period. Such indicators typically provide binary assessments as to whether the user has moved upwards (e.g. climbed a set of stairs) or downwards (e.g. descended a set of stairs), but are not typically suitable for determining vertical position accurately. The algorithm described by Figure 3B may further comprise updating a vertical position of the user device (e.g. a label indicating on which floor of a building the user device is currently located). This vertical position may be determined from data input manually by a user (i.e. manual "check-in" data) indicating on which floor the user device is currently located. Alternatively, the vertical movement indicators may simply be used to determine whether the user has moved vertically upwards, vertically downwards or otherwise. Alternatively, the vertical position of the user device may be determined from another available positioning system such as a satellite positioning system. As another alternative, as described below, the vertical position of the user device may be determined from the vertical acceleration data and the principal direction data. In step 76 it may be determined whether a user has moved to a different floor of a building in order to improve the estimate of the (particularly the vertical) position of the user device 1.
In this step, the walk-path motion vectors generated from vertical acceleration and principal direction data generated by the user device over a particular time interval may be compared to reference data to determine whether a user has climbed or descended a set of stairs. The reference data typically comprises one or more candidate reference patterns, each candidate reference pattern comprising a plurality of motion vectors arranged in a specific order relating to the relative movements required by the user to ascend or descend a particular staircase within the building. In this case, two or more of the generated motion vectors are compared to the candidate reference patterns and, if the motion vectors conform to one of the one or more candidate reference patterns representative of a floor change, floor change data associated with the conforming reference pattern is retrieved from a database of floor change data stored within the memory 12. A vertical position of the user device may then be updated accordingly (see below). Typically, only two or more motion vectors generated from data generated by the user device during the particular time interval may be compared to the reference patterns. The particular time interval may comprise one or more time periods, or one or more fractions of one or more time periods. An exemplary reference pattern is illustrated in Figures 4A to 4C. Figures 4A and 4B are side and plan views of a staircase 80. In order to climb the staircase from point A to point C, a user must take a plurality of steps in one lateral direction (left on Figures 4A and 4B) before taking a plurality of steps in an opposite lateral direction (right on Figures 4A and 4B). This is illustrated by the reference pattern in Figure 4C. The user must also move vertically upwards. 
Accordingly, if, for example, the relevant vertical indicators indicate vertically upwards motion between points A and C, the steps associated with three successive validated, detected steps (within the particular time interval) are generally in the left direction and the steps associated with three subsequent successive steps (within the particular time interval) are generally in the right direction, floor change data may be generated by the controller 2 indicating that the user has climbed the staircase 80. Conversely, if the relevant vertical indicator(s) indicate vertically downwards motion between points C and A, the steps associated with three successive validated, detected steps (within the particular time interval) are generally in the left direction and the steps associated with three subsequent successive steps (within the particular time interval) are generally in the right direction, floor change data may be generated by the controller 2 indicating that the user has descended the staircase 80. Prior knowledge of a user's current floor may also be used to determine or subsequently validate whether a user has climbed or descended a staircase. The floor change data is stored in the memory 12. It will be understood that complete conformance of said motion vectors and the reference pattern is not necessary. In some embodiments, a minimum error (e.g. least squares error) algorithm may be employed to determine which of the reference patterns best matches the motion vectors. In some cases, the minimum error must be less than an error threshold for a conformance to be determined. It will also be understood that some steps which extend neither in the left nor right directions may occur between the successive left and successive right steps to account for a landing (e.g. at point B) between the left and right parts of the staircase 80.
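The minimum-error matching of generated motion vectors against candidate reference patterns can be sketched as follows. This is a simplified least-squares illustration under our own assumptions: the pattern keys, the error threshold, and the restriction to same-length sequences are not specified by the patent:

```python
def match_reference_pattern(vectors, patterns, error_threshold=1.0):
    """Return the key of the candidate reference pattern with the
    minimum summed squared error against the generated motion vectors,
    or None if no pattern's error falls below the threshold.

    vectors: list of (x, y) motion vectors.
    patterns: dict mapping a pattern key to a list of (x, y) vectors.
    """
    best_key, best_err = None, float("inf")
    for key, pattern in patterns.items():
        if len(pattern) != len(vectors):
            continue   # only compare sequences of equal length
        err = sum((vx - px) ** 2 + (vy - py) ** 2
                  for (vx, vy), (px, py) in zip(vectors, pattern))
        if err < best_err:
            best_key, best_err = key, err
    return best_key if best_err < error_threshold else None
```

A matched key would then be used to retrieve the associated floor change data from the floor change database; a return of None means no conformance was determined.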
In addition, it will be understood that only one vertical indicator may need to be analysed if all steps in the particular time interval are provided in the same time period. Alternatively, a plurality of vertical indicators may need to be analysed if the relevant steps occur in different time periods. When the walk-path motion vectors have been determined and stored in the vector buffer, and the floor change data has been generated and stored in memory 12, an updated position of the user device 1 can be determined in step 78 by combining the motion vectors with the initial position of the user device 1 and by updating a label (e.g. floor number) on the user device 1 indicating which floor the user is currently on in accordance with the floor change data. This updated position may then be reported to the user, for example by updating a position indicator on a map displayed on the user device 1. The method of Figure 3B may then return to step 60 so that the above steps may be repeated. Preferably, for example to save power, step 76 may only be enabled in response to one or more vertical movement indicators which indicate that the user has moved up or down floors within a building in order to minimise the processing required. Because the accelerometer 4, compass 6 and gyroscope 8 are provided internally to the user device 1 , the user device 1 does not need to communicate with any external devices (e.g. satellites, wireless access points) or have access to a data communications network or a server in order to estimate its position. Accordingly, particularly but not exclusively, the method for estimating the position of the user device 1 illustrated in Figures 3A and 3B may be used when other position estimation technologies are unavailable (e.g. 
when no line of sight is present between the user device and satellites for satellite positioning, or when no known Wireless Access Points are in range of the user device or no access to a server via a data communications network is available) or when the other position estimation technologies cannot provide greater accuracy than the method illustrated in Figures 3A and 3B. The accelerometer 4 may be a 1- or 2-axis accelerometer arranged to detect vertical acceleration. Alternatively, the accelerometer 4 may be a 3-axis accelerometer operable to detect vertical acceleration. As indicated above, the user device 1 may be operable to use positioning systems other than the satellite positioning system and the method described above using the accelerometer 4 and the compass 8/gyroscope 6. For example, the user device 1 may comprise an electromagnetic signal receiver module in electronic communication with the controller 2, the electromagnetic receiver module being operable to receive electromagnetic signals transmitted by electromagnetic signal sources (e.g. Wi-Fi, Bluetooth, GSM base stations, near-field-communication beacons etc) of known position. In this case, the controller 2 is operable to estimate the position of the user device 1, for example by triangulation (or by reference to a map of electromagnetic signal source fingerprint data). When an alternative positioning method, such as a satellite positioning system or an electromagnetic signal source based positioning system (or other suitable positioning system), is available to the user device 1, and that alternative positioning method meets one or more (absolute or relative) accuracy criteria (e.g. it is accurate to within a predetermined radius or it is of greater accuracy than the method of Figures 3A and 3B), the user device 1 may prioritise that/those alternative positioning methods over the method described in Figures 3A and 3B. 
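The prioritisation decision just described can be sketched as follows. This is an assumption-laden illustration: combining the absolute and relative criteria with a logical AND, the 20-metre radius and the 20% relative margin are all illustrative choices, and the function name is hypothetical.

```python
# Hedged sketch of prioritising an alternative positioning method (e.g.
# satellite or Wi-Fi based) over the dead-reckoning method of Figures
# 3A and 3B when it meets the accuracy criteria.

def should_prioritise(alt_error, dead_reckoning_error,
                      absolute_radius=20.0, relative_margin=0.2):
    """All distances in metres; both criteria must hold (an assumption)."""
    meets_absolute = alt_error <= absolute_radius
    meets_relative = alt_error < dead_reckoning_error * (1.0 - relative_margin)
    return meets_absolute and meets_relative

print(should_prioritise(8.0, 15.0))    # True: within 20 m and >20% better
print(should_prioritise(14.0, 15.0))   # False: not at least 20% better
```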
The error of the position estimated using the method of Figures 3A and 3B may be estimated using the following formulae:

Error_estimated_position = Error_initial_position + Accumulated_Error

Accumulated_Error = N_steps / 10 metres

where Error_estimated_position is the estimated error in the position estimated using the method of Figures 3A and 3B; Error_initial_position is the estimated error in the initial position; Accumulated_Error is the estimated error accumulated during use of the method of Figures 3A and 3B; and N_steps is the number of validated steps since the initial position. Satellite positioning systems such as GPS typically provide estimates of the possible errors present in their position estimates. Accordingly, the estimated error provided by a satellite positioning system may be compared to Error_estimated_position in order to determine whether the satellite positioning system meets a relative accuracy criterion stipulating that the estimated error of the alternative positioning method must be less than Error_estimated_position for the alternative positioning method to be prioritised over the method described in Figures 3A and 3B. The estimated error provided by the satellite positioning system may additionally or alternatively be compared against one or more absolute accuracy criteria (e.g. accurate to within 20 metres) to determine whether the position estimates provided by the satellite positioning system should be prioritised over the method described in Figures 3A and 3B. The possible errors present in the position estimates provided by a positioning system (e.g. a triangulation based positioning system) based on processing signals received from electromagnetic signal sources of known position may be estimated locally by the user device 1. For example, although it will be understood that any suitable algorithm may be used, one such suitable algorithm is the Best Candidate Set algorithm. 
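The error-accumulation formulae above can be sketched directly (the function names are illustrative; distances are in metres, and the one-metre-per-ten-validated-steps rate is taken from the text):

```python
# Illustrative implementation of the estimated-error formulae and the
# relative accuracy criterion described above.

def estimated_position_error(error_initial_position, n_steps):
    """Error_estimated_position = Error_initial_position + N_steps / 10."""
    accumulated_error = n_steps / 10.0
    return error_initial_position + accumulated_error

def meets_relative_criterion(error_alternative, error_initial_position, n_steps):
    """The alternative method's reported error must be smaller."""
    return error_alternative < estimated_position_error(error_initial_position,
                                                        n_steps)

print(estimated_position_error(3.0, 50))        # 8.0 (3 m initial + 5 m drift)
print(meets_relative_criterion(6.0, 3.0, 50))   # True
```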
In this case, the controller 2 calculates the n best estimates of the position of the user device 1 based on the electromagnetic signals received by the user device 1 (the best of the n best estimates typically being selected as the estimated position of the device). Next, the distances between the best estimate of the position of the user device 1 and all of the other (n-1) best estimates are calculated. These calculated distances are then processed to determine an average (e.g. mean) distance between the best estimate and the other (n-1) best estimates. This average distance is then returned as the estimated error in the estimated position of the device 1. This estimated error can then be compared to Error_estimated_position to determine whether the said positioning system based on processing signals received from electromagnetic signal sources of known position meets the said relative accuracy criterion, and/or against one or more absolute accuracy criteria, in order to determine whether the said positioning system should be prioritised over the method described in Figures 3A and 3B. The position of the user device estimated by the alternative positioning method may also need to be validated before it can be prioritised over the position estimated by the method of Figures 3A and 3B. This may involve estimating a second position of the user device using the alternative positioning method, comparing the second estimated position with the first estimated position made using the alternative positioning method to determine a difference value, and comparing the difference value to a threshold value. If the difference value is less than the threshold value, it may be determined that the first estimated position was not determined in error and can therefore be validated. 
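The Best Candidate Set error estimate described above reduces to a few lines; this minimal sketch assumes 2D candidate positions and a hypothetical function name:

```python
# Sketch of the Best Candidate Set error estimate: the mean distance from
# the best candidate position to the other (n - 1) candidates.
import math

def best_candidate_set_error(candidates):
    """candidates: list of (x, y) position estimates, best estimate first."""
    best, rest = candidates[0], candidates[1:]
    return sum(math.dist(best, c) for c in rest) / len(rest)

print(best_candidate_set_error([(0, 0), (3, 4), (0, 10)]))   # (5 + 10) / 2 = 7.5
```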
If the difference value is greater than the threshold value, it may be determined that the first estimated position was determined in error and should therefore be invalidated, in which case the first estimated position may not be used to estimate the position of the user device. The accuracy criteria may stipulate that the alternative positioning method needs to be more accurate than the positioning system of Figures 3A and 3B by a threshold amount (e.g. 10%, 20% or 30%) before the alternative positioning method is prioritised over the positioning system of Figures 3A and 3B. The accuracy criteria may additionally or alternatively include an absolute accuracy criterion which stipulates that the estimated accuracy of the alternative positioning method must be accurate to a sufficient degree before said alternative positioning method is prioritised over the method of Figures 3A and 3B. For example, the alternative positioning method may need to be accurate to within a predetermined distance range (e.g. accurate to within a radius of 10 metres or 20 metres). The method described in Figures 3A and 3B may thus further include the step of resetting the estimated position of the user device 1 using an alternative positioning method when such an alternative positioning method is available to a sufficient degree of accuracy. Alternatively, the method of Figures 3A and 3B may be interrupted when an alternative positioning method becomes available which meets the accuracy criteria. At this point, the buffer section of the memory 12 may be cleared. A position of the user device 1 determined from the alternative positioning method may then be used as the initial position in the method of Figures 3A and 3B, for example when said alternative positioning method(s) are unavailable to the user device 1 (e.g. 
in a railway tunnel where no line of sight is available between positioning satellites and the satellite positioning system receiver 10 of the user device 1), or when said alternative positioning method(s) do not meet said accuracy criteria, in which case the process of Figure 3B resumes at step 60 and step 58 of Figure 3A resumes. It will be understood that the step 58 of Figure 3A may continue in the background while other positioning systems are being used. When an alternative positioning method meeting the one or more accuracy criteria is available, a position of the user device estimated by the alternative positioning method may be subsequently compared to an updated position of the user device (i.e. estimated using the method of Figures 3A and 3B, where vertical acceleration and principal direction data are processed together with an initial position) to determine one or more accuracy measurements of the updated position. If the one or more accuracy measurements of the updated position fail one or more second accuracy criteria (which may be different from the accuracy criteria mentioned above), the magnitude and/or chronological threshold values may be adjusted to improve the accuracy of subsequent position estimates of the user device using vertical acceleration and principal direction data together with an initial position. Additionally or alternatively, the estimated average stride length of the user may be updated to improve the accuracy of subsequent position estimates of the user device using vertical acceleration and principal direction data. The one or more accuracy measurements typically comprise a measurement of the distance, and optionally a measurement of the direction, from the position estimated by the alternative positioning method. In this case, the one or more second accuracy criteria comprises a threshold distance value (and optionally a threshold direction value). 
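The accuracy-feedback loop described above can be sketched as follows. The function and parameter names are illustrative assumptions; the 0.02 adjustment to the stride length coefficient C follows the fixed factor given in the text, and only the stride-length branch of the adjustment (not the magnitude/chronological thresholds) is shown.

```python
# Hedged sketch: if the dead-reckoned fix drifts beyond the threshold
# distance from a trusted alternative fix, nudge the stride-length
# coefficient C towards whichever estimate better agrees with the
# initial position.
import math

def adjust_stride_coefficient(c, initial_pos, dr_pos, alt_pos,
                              threshold_distance, step=0.02):
    """Return a (possibly) adjusted stride-length coefficient C."""
    if math.dist(dr_pos, alt_pos) <= threshold_distance:
        return c                      # second accuracy criteria met: no change
    alt_closer = math.dist(alt_pos, initial_pos) < math.dist(dr_pos, initial_pos)
    # Alternative fix closer to the start => stride length was overestimated.
    return c - step if alt_closer else c + step

print(adjust_stride_coefficient(1.0, (0, 0), (12, 0), (10, 0),
                                threshold_distance=1.0))   # 0.98
```

Iterating this adjustment over time, as the text describes, gradually tunes C to the individual user's gait.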
If the accuracy measurement(s) is (are) less than the threshold distance (and optionally threshold direction) values, the magnitude and chronological threshold values may be left unchanged. However, if the accuracy measurement(s) is (are) greater than the threshold distance (and optionally threshold direction) values, the magnitude and chronological threshold values may be adjusted accordingly. In an exemplary embodiment, the position of the user device estimated by the alternative positioning method may first need to meet an absolute accuracy criterion as described above (e.g. be accurate to within a predetermined distance range, such as 20 metres). The difference between the position estimated using the alternative positioning method and the position estimated using the method of Figures 3A and 3B is then calculated and compared against one or more threshold adjustment criteria. The threshold adjustment criteria may comprise a threshold adjustment value. The threshold adjustment value may be derived from the estimated errors in the position estimated using the method of Figures 3A and 3B and/or the estimated errors in the position estimated using the alternative positioning method. For example, the threshold adjustment value may be calculated as follows:

2 × (Error_estimated_position + Error_alternative_positioning_method)

where Error_estimated_position is as defined above; and Error_alternative_positioning_method is the estimated error of the alternative positioning method. As described above, where the alternative positioning method is a satellite positioning system, a measure of Error_alternative_positioning_method is typically provided by the satellite positioning system. As also described above, where the alternative positioning method is (e.g. 
a triangulation based positioning system) based on processing signals received from electromagnetic signal sources of known position, Error_alternative_positioning_method can be estimated by, for example, the Best Candidate Set algorithm described above. If the difference is greater than or equal to the threshold adjustment value, the magnitude and/or chronological threshold values and/or the estimated average stride length of the user may need to be adjusted. However, as a final check, the position estimated using the alternative positioning method may need to be validated as described above before these parameters are adjusted. If the position estimated using the alternative positioning method is validated, a comparison is made to determine whether the position estimated using the alternative positioning method or the position estimated using the method of Figures 3A and 3B is closer to the initial position. If the position estimated using the alternative positioning method is closer to the initial position, the stride length coefficient, C, may be reduced (e.g. by a fixed factor such as 0.02) so as to reduce the estimated average stride length of the user. If the position estimated using the method of Figures 3A and 3B is closer to the initial position, the stride length coefficient, C, may be increased (e.g. by a fixed factor such as 0.02) so as to increase the estimated average stride length of the user. It will be understood that similar adjustments may additionally or alternatively be made to the magnitude and/or chronological threshold values. The magnitude and/or chronological threshold values and/or the estimated average stride length of the user may be iterated over time to improve the accuracy of the updated position. To maintain continuity between chronologically adjacent time periods, if the last turning point detected in a time period is a local maximum, this local maximum reading (i.e. 
the magnitude and chronological index of that local maximum) may be retained in memory 12 for use in step detection in the chronologically subsequent time period. Optionally, the updated position determined by combining the initial position with the motion vectors can be used to create, update or correct a map of electromagnetic signal source (e.g. Wireless Access Point, WAP) positions. In this case, the user device 1 further comprises a wireless module operable to receive wireless signals conforming to one or more specific wireless standards (e.g. the Wi-Fi, Wi-Max, Bluetooth, Zigbee and/or near-field communications standards). When the user device 1 samples signals from the accelerometer 4, compass 8 and gyroscope 6, it may also sample wireless signals from (e.g. Wi-Fi, Wi-Max, Bluetooth, Zigbee and/or near-field compatible) electromagnetic signal sources which are within range of the user device 1. The user device 1 may then either store the signals and create, update or correct a map of electromagnetic signal source positions locally on the device or, more typically, the user device 1 will temporarily buffer the received electromagnetic signals before transmitting them to a server over a data communications network (such as the internet, e.g. via Wi-Fi, or a 2G, 2.5G, 3G or 4G mobile data communications network). In the latter case, the server creates, updates or corrects the map of electromagnetic signal source positions, and subsequently makes the (created, updated or corrected) map available to the user device 1. An example of determining the position of an electromagnetic signal source using triangulation is provided below with reference to Figure 5. However, it will be understood that any suitable algorithm may be employed. Figure 5 is an illustration of the process of triangulating (in 2D) the position of an electromagnetic signal source. 
Three different scanning sites 90, 92, 94 are shown, each representing a different location at which the user device 1 detects a signal transmitted by an electromagnetic signal source located in the approximate region 96. The electromagnetic signal source/region 96 is at a distance d1, d2, d3 from the respective scanning sites 90, 92, 94. Each site 90, 92, 94 is surrounded by a circle representing the locus of all points at distance dn. Here, dn may be derived from any available distance measurement model. For example, the strength (power) of an electromagnetic signal received by the user device 1 may be described in free space by the following equation:
P_r = P_t G_t G_r λ² / ((4π)² d²)

where P_r is the received signal power from the WAP, P_t is the transmitted power from the electromagnetic signal source, G_r and G_t are the receiver and transmitter antenna gains respectively, λ is the signal wavelength and d is the distance between source and receiver. This equation can also be represented in terms of propagation gain (PG) as:

P_r / (P_t G_t G_r) = (λ / (4πd))²

and in decibel form as:

PG_dB = 20 log(λ / (4πd))

The free space model (equations) cannot easily be applied in real world environments without modification because of signal propagation uncertainties. Electromagnetic signal propagation can be affected by many factors such as signal attenuation and reflections (multipath effects) from surfaces, building types, moving objects and people, transmission frequency, antenna heights and polarisation, and so on. However, various models exist which try to model different environments and signal propagation behaviour through them to determine the distance between receiver and source. For example, there are models available to predict signal behaviour for different indoor environments. One such indoor model is described by the following equation:
PG_dB = 20 log(4πd0/λ) + 10 n log(d/d0) + X, for d > d0

where X, n and d0 are parameters which vary with different indoor environments and which can be determined empirically. For example, the values of X, n and d0 for a typical hard partitioned office environment are 7.0, 3.0 and 100 respectively. User input can be provided to select the type of environment and then to use specific values of the abovementioned parameters stored in memory (that were, for example, previously input by the user or another operator). Alternatively, if user inputs are not available, default values can be chosen from the software configuration. There are also models available for outdoor environments. One such model, designated the Stanford University Interim (SUI) Model, is described by the following equation:
PL = 20 log(4πd0/λ) + 10 n log(d/d0) + X_f + X_h + s, for d > d0

where PL is the path loss and the other parameters can be processed similarly as described for the indoor model, that is (for example) either through user inputs or from the software configuration. In each of the above equations, when all other parameters are known, the distance d can be readily deduced to determine the distances dn. The distances dn may then be used together with the location co-ordinates of sites 90, 92, 94 derived using the method of Figures 3A and 3B in the following equation:

d_i² = (x_r - x_si)² + (y_r - y_si)²

where d_i is the distance, x_r and y_r are the x and y co-ordinates of the electromagnetic signal source and x_si and y_si are the x and y co-ordinates of the scanning sites, where i is 1, 2, …, n. Three equations are formed and solved for the x and y co-ordinates of the electromagnetic signal source in region 96. These equations can be solved with any available method such as the least squares method. As shown in Figure 5, the mapped co-ordinates for the electromagnetic signal source in region 96 are where the three circles (the loci of the estimated distances between the sites and the electromagnetic signal source) overlap. The circles may not overlap at a single point because of errors in the measurement/estimation of the distances d1, d2, d3 and possible errors in the reference (or estimated) co-ordinates of the scanning sites 90, 92, 94. However, it can be appreciated that approximate positions of electromagnetic signal sources within range of the user device 1 can be determined. Where a map of electromagnetic signal source positions (or any other suitable database of electromagnetic signal source positions) is provided or created, updated or corrected as explained above, electromagnetic signals received by the user device can be used to identify the vertical position of the user device without using the vertical acceleration or principal direction data. 
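The distance estimation and triangulation steps above can be sketched end-to-end. This is a hedged illustration under stated assumptions: the indoor path-loss parameters (X = 7.0, n = 3.0, d0 = 100) are the hard-partitioned-office values from the text, the 0.125 m wavelength assumes a 2.4 GHz signal, the triangulation linearises the circle equations by subtracting the first from the others and solves by least squares, and all names are illustrative.

```python
# Sketch: invert the indoor path-loss model to get distances d_n, then
# triangulate the electromagnetic signal source from the scanning sites.
import math

def distance_from_path_loss(pg_db, x=7.0, n=3.0, d0=100.0, wavelength=0.125):
    """Invert PG_dB = 20 log(4*pi*d0/lambda) + 10 n log(d/d0) + X for d."""
    reference_loss = 20.0 * math.log10(4.0 * math.pi * d0 / wavelength)
    return d0 * 10.0 ** ((pg_db - x - reference_loss) / (10.0 * n))

def trilaterate(sites, distances):
    """Least-squares (x, y) of a source from site co-ordinates and ranges."""
    (x1, y1), d1 = sites[0], distances[0]
    rows, rhs = [], []
    for (xi, yi), di in zip(sites[1:], distances[1:]):
        # Subtracting circle 1 from circle i gives a linear equation in (x, y).
        rows.append((2.0 * (xi - x1), 2.0 * (yi - y1)))
        rhs.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    # Normal equations of the two-unknown least-squares problem.
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * v for r, v in zip(rows, rhs))
    b2 = sum(r[1] * v for r, v in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

x, y = trilaterate([(0, 0), (4, 0), (0, 4)],
                   [math.sqrt(2), math.sqrt(10), math.sqrt(10)])
# (x, y) is approximately (1.0, 1.0): the point where the three circles meet.
```

With more than three sites the same least-squares solve absorbs the circle-overlap errors noted in the text.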
This can be done by: processing the received electromagnetic signals to extract identification data relating to the electromagnetic signal source(s); and comparing the identification data to the map of electromagnetic signal source positions (or any other suitable database of electromagnetic signal source positions) to extract a vertical position of each of the one or more electromagnetic signal sources (e.g. which floor of a building the electromagnetic signal source is located on). Alternatively, the vertical positions of the electromagnetic signal sources may simply be comprised in the data transmitted by said sources, in which case said positions can be extracted simply by signal processing. The vertical positions of electromagnetic signal sources can then be compared with a currently estimated vertical position of the user device to determine whether the user device has changed floors. The vertical position of the user device may also be updated accordingly. This method can be extended to generate reference patterns representative of floor changes made by the user device which can be used (as described above) to determine whether the user device has changed floors (for example but not exclusively in the absence of suitable electromagnetic signals being detected by the user device). Each reference pattern may be generated by identifying and storing two or more floor-change motion vectors generated in respect of validated steps taken by the user during the floor change. This may for example involve identifying the motion vectors generated from data generated by the user device during the particular time interval ending with the floor change. It will be understood that the floor-change motion vectors, or data derived therefrom, may be used as a reference pattern. Further modifications and variations may be made within the scope of the invention herein disclosed.