WO/2013/105359 | TRAVEL DISTANCE MEASUREMENT DEVICE |
JP3535806 | WALK SIGNAL GENERATING DEVICE FOR PEDOMETER |
JP2003302248 | RUNNING DISTANCE INTEGRATING METER FOR VEHICLE |
ELYASI FATEMEH (US)
MANDUCHI ROBERTO (US)
US20190128673A1 | 2019-05-02 |
CHEN CHANGHAO, LU XIAOXUAN, MARKHAM ANDREW, TRIGONI NIKI: "IONet: Learning to Cure the Curse of Drift in Inertial Odometry", PROCEEDINGS OF THE AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, vol. 32, no. 1, 1 January 2018 (2018-01-01), XP093017568, ISSN: 2159-5399, DOI: 10.1609/aaai.v32i1.12102
REN PENG, ELYASI FATEMEH, MANDUCHI ROBERTO: "Smartphone-Based Inertial Odometry for Blind Walkers", SENSORS, vol. 21, no. 12, 11 June 2021 (2021-06-11), pages 4033, XP093017571, DOI: 10.3390/s21124033
WHAT IS CLAIMED IS:

1. A navigation system determining turns in a user’s trajectory, comprising: a smartphone comprising a display and one or more sensors and at least comprising or coupled to one or more processors; one or more memories; and one or more programs stored in the one or more memories, wherein the one or more programs executed by the one or more processors carry out the following acts: receiving input data comprising orientation data and acceleration data from the one or more sensors carried by a user taking steps along a trajectory; detecting a plurality of n straight sections in the trajectory, where n is an integer, each of the straight sections corresponding to the user walking along a substantially straight or linear path; generating and tracking an orientation of the user in each of the n straight sections, wherein the orientation comprises an estimated orientation taking into account drift of the input data outputted from the one or more sensors; detecting one or more turns in the trajectory, wherein each of the turns is a change in the estimated orientation of the user in the nth one of the straight sections as compared to the estimated orientation in the (n-1)th one of the straight sections; and outputting, to the display, a graphical representation of the trajectory generated using the one or more turns.

2.
The system of claim 1, wherein detecting the straight sections further comprises the one or more programs: storing the input data in a database; transforming the input data into trajectory detection data processable by a first machine learning module; classifying the trajectory detection data as representing motion in one of the straight sections or in a non-straight section using the first machine learning module; and at least labelling one or more values of the trajectory detection data in the database or a second database as being associated with one of the straight sections, or indicating the one of the straight sections on the graphical representation on the display, if the one or more values are classified by the first machine learning module as being associated with the one of the straight sections.

3. The system of claim 2, wherein the first machine learning module is trained using training data comprising at least WeAllWalk data, or the acceleration and the orientation of pedestrians comprising blind or visually impaired persons walking using a walking aid.

4. The system of claim 2, wherein the first machine learning module comprises a GRU neural network.

5. The system of claim 2, wherein the first machine learning module comprises a recurrent neural network trained to identify, from the acceleration and the orientation data comprising an azimuthal angle, each of the straight sections comprising one or more time intervals during which the user walks regularly or substantially on a straight path.

6. The system of any of claims 1-5, wherein the first machine learning module is trained to disregard changes in the orientation resulting from the user comprising a visually impaired user stopping and rotating their body to re-orient or swerving to avoid a perceived obstacle.

7. The system of claim 1, further comprising detecting each of the straight sections after the one or more programs sample the orientation data for no more than 1 second.

8.
The system of claim 1, wherein a turn by an angle of 90° is tracked as two consecutive turns by 45°.

9. The system of claim 1, wherein the one or more programs: transform coordinates of the input data into a heading agnostic reference frame, to obtain heading agnostic data; detect the steps as detected steps by associating impulses in the heading agnostic data with heel strikes using a second machine learning module; count a number of the detected steps by associating the steps with a stride length, so as to output step data; determine the trajectory using the detected steps and the turns; and the trajectory comprises: one or more step displacement vectors defined at each of the detected steps as having a first length equal to the stride length and a direction comprising an azimuthal angle obtained from the input data; and one or more turn displacement vectors defined as having a second length equal to the step length and with the direction determined from the turns detected.

10. The system of claim 9, further comprising the one or more programs: transforming the heading agnostic data into trajectory detection data processable by the second machine learning module; and at least classifying or recognizing one or more values of the trajectory detection data as being associated with steps using the second machine learning module, or counting the steps using the second machine learning module.

11. The system of claim 10, wherein the second machine learning module is trained using reference trajectory data outputted from another machine learning module identifying the user’s trajectory.

12. The system of claim 10, wherein the second machine learning module comprises an LSTM neural network comprising no more than 2 layers and a hidden unit size of no more than 6.

13. The system of claim 1, wherein the trajectory is determined without reference to a map of an environment in which the user is moving.

14.
The system of claim 1, wherein the one or more programs: receive a map of an environment in which the user is moving, the map identifying one or more impenetrable walls; and determine the trajectory by comparing the trajectory to the map and eliminating one or more paths in the trajectory that traverse the one or more impenetrable walls.

15. The system of claim 14, wherein the one or more programs: receive or obtain velocity vectors of the user from the input data or another source; generate posterior locations of the user from the velocity vectors using a particle filtering module; generate a mean shift estimating locations of the user corresponding to highest modes of the posterior locations to obtain estimated locations; and generate the trajectory by linking pairs of the estimated locations that share the largest number of the highest modes.

16. A method for determining turns in a user’s trajectory, comprising: capturing input data, comprising orientation data and acceleration data from one or more sensors carried by a user taking steps along a trajectory; detecting a plurality of n straight sections in the trajectory, each of the straight sections corresponding to the user walking along a substantially straight path or a linear path; generating and tracking an orientation of the user in each of the n straight sections, wherein the orientation comprises an estimated orientation taking into account drift of the input data outputted from the one or more sensors; and detecting one or more turns in the trajectory, wherein each of the turns is a change in the estimated orientation of the user in the nth one of the straight sections as compared to the estimated orientation in the (n-1)th one of the straight sections.

17.
The method of claim 16, further comprising: storing the input data in a database on a computer; transforming the input data into trajectory detection data processable by a first machine learning module; classifying the trajectory detection data as representing motion in one of the straight sections or in a non-straight section using the first machine learning module; and at least labelling one or more values of the trajectory detection data in a database as being associated with one of the straight sections, or indicating the one of the straight sections on a graphical representation on a display, if the one or more values are classified by the first machine learning module as being associated with the one of the straight sections.

18. The method of claim 17, further comprising training the first machine learning module for detecting at least one of the straight sections or the turns in the trajectory, comprising: collecting a set of first pedestrian data, the first pedestrian data comprising the orientation data and the acceleration data for one or more walking pedestrians; applying one or more transformations to the first pedestrian data including smoothing to create a modified set of pedestrian data; creating a first training set from the modified set, comprising labelled straight walking sections and labelled non-straight walking sections; and training the first machine learning module using the first training set to identify the straight sections in the trajectory data using the orientation data and the acceleration data, the straight sections each corresponding to the user walking along the linear path.

19. The method of claim 18, wherein the first training set comprises the modified set comprising data for the walking pedestrians comprising blind or visually impaired persons walking using a walking aid comprising at least one of a cane or a guide dog.

20. The method of claim 19, wherein the first pedestrian data comprises a WeAllWalk data set.

21.
The method of claim 18, wherein: the applying of the transformations comprises removing data, from the first pedestrian data, associated with a single 45° turn or 90° turns associated with a 45° turn; and the training comprises training the first machine learning module, or another machine learning module, to detect, identify, or classify the turns in the trajectory comprising 90° turns.

22. The method of claim 18, wherein: the applying the transformations comprises converting the first pedestrian data corresponding to 90° turns into two consecutive 45° turns; and the training comprises training the first machine learning module, or another machine learning module, to detect, identify, or classify the turns comprising the 45° turns.

23. The method of claim 16 or 18, further comprising: creating a second training set from the modified set, the second training set comprising orientation turns between adjacent ones of the straight sections; and training the first machine learning module, or another machine learning module, to detect, classify, or identify the turns in the trajectory using the second training set.

24. The method of claim 23, further comprising: creating a third training set comprising ground truth turns obtained from a database comprising ground truth data associated with the trajectory; and training the first machine learning module, or the another machine learning module using the ground truth turns, to detect, identify, or classify the turns in the trajectory using the third training set.

25.
The method of claim 16 or 18, further comprising training a second machine learning module for detecting the steps in the trajectory, comprising: collecting a set of second pedestrian data comprising the acceleration data in three orthogonal directions and a rotation rate of a smartphone (or one or more sensors carried by the pedestrians and coupled to the smartphone) in each of the three orthogonal directions, wherein the acceleration is associated with steps taken by the pedestrian along the trajectory; applying one or more second transformations to the second pedestrian data, comprising transforming coordinates of the second pedestrian data into a heading agnostic reference frame, to obtain heading agnostic data; creating a second training set from the heading agnostic data, wherein steps are associated with: impulses identified using the acceleration data, the impulses corresponding to heel strikes of the walking pedestrian, in the heading agnostic data; and a stride length of the pedestrian; and training the second machine learning module using the second training set to at least identify or count a number of the steps in the trajectory by associating the steps with the impulses.

26. The method of claim 25, further comprising: creating a third training set comprising ground truth step data; and training the second machine learning module to detect the steps using the third training set.

27.
The method of claim 16 or 25, further comprising training a third machine learning module for mapping a trajectory, comprising: collecting a set of third pedestrian data comprising the trajectory mapped using the first machine learning module and the second machine learning module; creating a third training set comprising ground truth data comprising waypoint time stamps; and training the third machine learning module to determine or map the trajectory by comparing a degree of alignment of the trajectory (mapped using the first machine learning module and the second machine learning module) with a ground truth path passing through the waypoint time stamps.
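For illustration only, the straight-section detection and two-stage turn detection recited in the claims above can be sketched as follows. This is a minimal stand-in, not the claimed implementation: it replaces the trained GRU classifier of claims 2-5 with a simple azimuth-variance test over a sliding window, and quantizes turns to multiples of 45° as in claims 8 and 22. The function names, window size, and threshold are invented for this sketch.

```python
import math

def detect_straight_sections(azimuth_deg, window=50, max_std_deg=5.0):
    """Label a sample as 'straight' when the azimuth is nearly constant
    over some sliding window containing it (a heuristic stand-in for
    the trained straight-section classifier)."""
    n = len(azimuth_deg)
    straight = [False] * n
    for i in range(n - window + 1):
        w = azimuth_deg[i:i + window]
        mean = sum(w) / window
        std = math.sqrt(sum((a - mean) ** 2 for a in w) / window)
        if std < max_std_deg:
            for j in range(i, i + window):
                straight[j] = True
    return straight

def detect_turns(azimuth_deg, straight, quantum_deg=45.0):
    """Two-stage turn detection: average the heading inside each
    straight section, then report the quantized heading change
    between consecutive straight sections."""
    sections, start = [], None
    for i, flag in enumerate(straight + [False]):  # sentinel closes last run
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            seg = azimuth_deg[start:i]
            sections.append(sum(seg) / len(seg))
            start = None
    return [quantum_deg * round((cur - prev) / quantum_deg)
            for prev, cur in zip(sections, sections[1:])]
```

Because turns are defined only between consecutive straight sections, a brief stop-and-rotate (which produces no new straight section at an intermediate heading) does not register as a path turn, in the spirit of claim 6.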
5.4. Example Evaluation Metrics

Each reconstructed trajectory is compared with the ground-truth path. WeAllWalk does not provide detailed information about the walkers’ location at each time, but only the timestamps indicating when a walker reached individual waypoints. We make the simplifying assumption that walkers were located in the middle of the corridor width when transitioning between segments through each such waypoint. Given a trajectory estimated for a walker in a path, we use the estimated locations at the waypoint timestamps to first align this trajectory with the ground-truth path. This standard procedure [78] is necessary because the reference frame used to represent the trajectory is undefined. Specifically, we find (using Procrustes analysis) the rotation and translation that minimize the mean squared distance between the ground-truth locations q_i of the waypoints and the estimated locations q̂_i at the corresponding timestamps. After alignment, we evaluate the goodness of the estimated path using three different metrics.

The first metric considered is the RMSE of the estimated waypoint locations:

RMSE_wp = sqrt( (1/N) * sum_{i=1..N} ||q̂_i - q_i||^2 )

where N is the number of waypoints in the path. For the remaining metrics, a sampling of the estimated trajectory into a set A of points with a uniform inter-sample distance of 1 meter is considered. Likewise, the segments joining consecutive waypoints were sampled at intervals of 1 meter, resulting in a set B of points representing the path. We then compute the Hausdorff distance between these two sets of points, as well as their (weighted) average Hausdorff distance [79]:

Hauss(A, B) = max( max_{a in A} min_{b in B} ||a - b||, max_{b in B} min_{a in A} ||a - b|| )   (5)

avHauss(A, B) = (1/2) * ( (1/|A|) * sum_{a in A} min_{b in B} ||a - b|| + (1/|B|) * sum_{b in B} min_{a in A} ||a - b|| )   (6)

The Hausdorff distance penalizes any large (possibly episodic) discrepancy between the estimated trajectory and the ground truth. The average Hausdorff distance is a more lenient measure that penalizes consistent biases. These two metrics allow us to evaluate the goodness of the estimated trajectory in its entirety (and not just at waypoints). FIG. 7 illustrates the three considered metrics.

6.
Path Reconstruction

6.1 Map-less Path Reconstruction

The following algorithms for map-less path reconstruction (see FIG. 8) were considered:

- Azimuth/Steps (A/S): At each detected step, a displacement vector is defined with length equal to the step stride, and with orientation equal to the azimuth angle as provided by the phone;
- Turns/Steps (T/S): At each detected step, a displacement vector is defined with length equal to the step stride, and with orientation equal to the output of our two-stage turn detection method;
- RoNIN (R);
- Fine-tuned RoNIN (FR).

Reconstruction errors are shown for the three considered metrics in Table 3. Note that fine-tuned RoNIN was only considered for blind walkers (LC and DG). Examples of map-less path reconstruction using fine-tuned RoNIN, Azimuth/Steps, and 90° Turns/Steps are shown in FIG. 9. Although the reconstruction errors are computed after alignment as described in Section 4.4.1, the paths have been re-aligned on the underlying map in the figure for easy comparison with the map-assisted reconstructed paths.

Table 3. Reconstruction errors (RMSEwp, Hauss, avHauss) using the map-less path reconstruction algorithms described in Section 4.4.2. Units of meters. For each community of walkers (S, LC, DG), the smallest error values for each metric are shown in boldface.

6.2 Map-Assisted Path Reconstruction

The particle filter was fed with data generated by three different algorithms: Azimuth/Steps (A/S), RoNIN, and, for blind walkers, fine-tuned RoNIN. The Turns/Steps algorithm was shown to give comparatively poor results in this case. Table 4 shows the results, for the three metrics considered, using particle filtering (PF), as well as particle filtering with mean shift mode selection in the “instantaneous” mode (PF-MS) and in the “global” mode (PF-MS-G; see Section 3.4). Sample reconstructed paths are shown in FIG. 9.

7. Example Results

7.1. Step Counting Results

The data in Table 1 and the curves in FIG.
6 clearly show how step counting accuracy is affected by the community of walkers used to train the system. For example, when testing a step counter trained on sighted walkers with long cane users (TS:LC), the sum of undercount and overcount rates was found to be 13.09% (SC-Error 1) or 8.33% (SC-Error 2). However, when the system was trained only with other long cane users (TC:LC), these numbers dropped to 6.49% and 1.87%, respectively. Similar observations can be drawn from the tests with dog guide users, for whom the best results were obtained when training on all available data (TA:DG). A possible cause for the worse performance of TC:DG with respect to TA:DG is the small number of dog guide users in WeAllWalk (only two users in the training set of each cross-validation round). The average threshold on the output of the LSTM as learned from within-community training data is substantially larger for sighted walkers (0.78) than for dog guide users (0.68) or long cane users (0.55). Larger thresholds should be expected when the output of the LSTM is closer to the binary signal used to indicate heel strikes. This suggests that the LSTM is better able to model the desired output for sighted walkers (possibly due to their more regular gait) than for blind walkers. The average stride lengths learned within-community are also larger for sighted walkers (0.74 m) than for dog guide users (0.62 m) or long cane users (0.55 m). This is not surprising considering that, in general, dog guide users walk faster and more confidently than long cane users, as they do not need to probe the space ahead with the cane and can rely on their dog guide to lead them along a safe trajectory. Comparison with [16], which also presents results for various step counting algorithms as applied to WeAllWalk, shows that use of an LSTM leads to improved results. For example, the lowest value for SC-Error 1 (measured as the sum of UC and OC rates) for long cane users was found to be 7.8% in [16] (vs.
6.5% with our system, see Table 1). For the same community of users, the minimum SC-Error 2 found in [16] was 4.8%, vs. 1.9% with our system.

Table 4 (see FIG. 14). Reconstruction errors (RMSEwp, Hauss, avHauss) using the map-assisted path reconstruction algorithms described in Section 5. Units of meters. For each community of walkers (S, LC, DG), the smallest error values for each metric are shown in boldface.

7.2. Turn Detection Results

Remarkably, Table 2 shows no undercounts or overcounts for sighted walkers (TC:S). This suggests that these participants tended to walk on straight lines, without large sway patterns that could generate false positives. Errors were generally higher for the 45° turn detector than for the 90° turn detector. These results should be evaluated keeping in mind that even a single missed turn, or a single false positive, could potentially lead to large path reconstruction errors. For long cane users, training with all available data (TA:LC) gave substantially better results than training only with data from sighted walkers (TS:LC). No such large discrepancies across training modalities were observed when testing with dog guide users, for whom the best results were obtained for within-community training (TC:DG). These results are vastly superior to those observed in [16], where turns were computed on WeAllWalk data using the algorithm described in [38]. In that case, the accumulated error (UC rate + OC rate) TD-Error was found to exceed 50%.

7.3. Map-less Path Reconstruction Results

The data in Table 3 shows that the smallest reconstruction errors were measured for the Turns/Steps algorithm (although for dog guide users, a similarly small error for the avHauss metric was obtained with fine-tuned RoNIN). The 45° Turns/Steps algorithm performed only marginally worse than the 90° case. Remarkably, the best training modality for long cane users (within-community training, TC:LC) gives errors similar to the best case for sighted walkers.
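The three error metrics reported in these tables (RMSEwp, Hauss, avHauss; defined in Section 5.4) can be computed from two aligned 2-D point sets with a short sketch. The function names are ours, and the Procrustes alignment is assumed to have been applied already:

```python
import math

def rmse_waypoints(estimated, ground_truth):
    """RMSE between estimated and ground-truth waypoint locations."""
    sq = [(ex - gx) ** 2 + (ey - gy) ** 2
          for (ex, ey), (gx, gy) in zip(estimated, ground_truth)]
    return math.sqrt(sum(sq) / len(sq))

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance: the worst-case distance from any
    point in one set to the nearest point in the other set."""
    def nearest(p, pts):
        return min(math.dist(p, q) for q in pts)
    return max(max(nearest(a, points_b) for a in points_a),
               max(nearest(b, points_a) for b in points_b))

def avg_hausdorff(points_a, points_b):
    """Average Hausdorff distance: lenient to episodic outliers,
    but still penalizes consistent bias."""
    def nearest(p, pts):
        return min(math.dist(p, q) for q in pts)
    return 0.5 * (sum(nearest(a, points_b) for a in points_a) / len(points_a)
                  + sum(nearest(b, points_a) for b in points_b) / len(points_b))
```

Here `points_a` would be the estimated trajectory resampled at 1 m spacing and `points_b` the resampled ground-truth path, as described in Section 5.4.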
However, when testing with long cane users using a system trained with sighted walkers, very poor results were obtained. For example, the RMSEwp error for the Turns/Steps system, which is 3.46 m for within-community training (TC:LC), jumps to 9.03 m when training on sighted walkers (TS:LC). For dog guide users, the best results were obtained by training over all available data. Both RoNIN and the simpler Azimuth/Steps algorithm are affected by drift, and produced comparable results. One notable exception is for long cane users when the system was trained with sighted walkers (TS:LC). In this case, RoNIN gave substantially better results than all other methods (though still worse than TC:LC). A likely reason for this can be found in the different average stride length between sighted and long cane users (see Section 7.1), which may cause incorrect reconstructed path lengths for TS:LC. RoNIN, which does not rely on stride lengths, may provide more accurate velocity measurements in this case. The data also shows that fine-tuning the RoNIN network did not result in improved path reconstruction performance.

7.4. Map-assisted Path Reconstruction Results

The best results for the community of blind walkers were obtained with the Azimuth/Steps (A/S) algorithm, processed by the PF-MS-G filter. For the sighted walkers, the best results were obtained with RoNIN, still processed by PF-MS-G, although these last results were only marginally better than using the A/S algorithm. It appears that the strong wall impenetrability constraint was able to limit the effect of azimuth drift. In general, errors for map-assisted reconstruction were substantially lower than for the map-less case. As in the prior cases, training the system over sighted walkers was shown to give poor results when tested with long cane users and, to a lesser extent, with dog guide users.
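The Azimuth/Steps reconstruction discussed in these results admits a very short sketch. This is illustrative only: the function name, default stride value, and the azimuth-measured-from-North convention are our assumptions, not part of the disclosure:

```python
import math

def azimuth_steps_path(step_azimuths_deg, stride_m=0.7, start=(0.0, 0.0)):
    """Map-less Azimuth/Steps (A/S): at each detected step, advance by
    one stride-length displacement vector oriented along the azimuth
    reported by the phone."""
    x, y = start
    path = [start]
    for az in step_azimuths_deg:
        rad = math.radians(az)
        x += stride_m * math.sin(rad)  # East component (azimuth from North)
        y += stride_m * math.cos(rad)  # North component
        path.append((x, y))
    return path
```

Because each displacement inherits the instantaneous azimuth, any heading drift accumulates along the path; this is the failure mode that the Turns/Steps variant and the map-assisted particle filter are designed to mitigate.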
Although the best results were obtained with the PF-MS-G algorithm, use of the mean shift clustering was not shown to give consistently better results overall. FIG. 9 provides some insight into the behavior of the reconstruction algorithms. In (A), both original paths (A/S, fine-tuned RoNIN) were grossly incorrect due to orientation drift. Particle filtering tracked and removed drift, and correctly reconstructed the paths. In the case of FIG. 9B, poor velocity estimation led to reconstructed segments that were too short (Azimuth/Steps) or too long (fine-tuned RoNIN). In both cases, particle filtering found incorrect paths through open doors. PF-MS-G was able to correctly reconstruct most of the path for the Azimuth/Steps case, but not for the fine-tuned RoNIN case. The trajectory output by either system in the case of FIG. 9C was affected by both drift and incorrect velocity estimation. Although particle filtering was for the most part able to successfully correct the trajectory in the Azimuth/Steps case, it produced a poor reconstruction for fine-tuned RoNIN. We note that the set of path reconstruction algorithms considered in this work includes inertial-based odometry algorithms used in research for blind walkers. For example, [68] used a PDR based on step counting coupled with orientation, while [67] additionally implemented a particle filter.

7.5. Voice-Assisted Navigation

For the map-less case, in which the strong constraint of wall impenetrability cannot be relied on, a simple turn or segment path representation was attempted. This is appropriate for buildings with corridors that intersect at discrete turning angles at multiples of 45° or 90°. Besides providing a strong spatial prior which can be used to compensate for drift, this representation is particularly useful for the verbal description of a path.
As an example, a path could be represented as “walk straight for 80 steps, then make a right turn, walk straight for 50 more steps, make a left turn, then after 40 steps you will reach your destination”. It is important to note that for a turn or segment representation to be successful, turns must be detected robustly, which may be challenging in some situations. For example, a blind walker may stop and turn around to get their bearings, or to listen to a sound that may help with orientation, something that could mistakenly be interpreted by the system as a path turn [18]. In one or more embodiments, such turns are disregarded by the system because the machine learning algorithm determining straight sections and the turn detector are trained to disregard turns that are not between consecutive straight sections.

8. Advantages and Improvements

The present disclosure includes a detailed study on the use of two odometry systems: a PDR based on step counts and orientation estimation, and a deep learning algorithm (RoNIN), to reconstruct paths taken by blind walkers using a long cane or a dog guide, as represented in the WeAllWalk dataset. For the map-less case, a two-stage system capable of robustly detecting turns (e.g., at multiples of 45° or 90°) combined with an RNN-based step counter with learned fixed stride length was introduced. For the map-assisted case, a standard particle filtering algorithm and a posterior distribution mode identification module were employed. Compared with work that explored inertial odometry for use by blind pedestrians (e.g., [18,67,68]), the disclosed study includes a variety of algorithms for both the map-assisted and map-less cases, and reports the results of extensive quantitative evaluations using appropriate metrics on a representative data set with blind walkers (WeAllWalk).
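The verbal path description illustrated in Section 7.5 (“walk straight for 80 steps, then make a right turn, ...”) can be generated mechanically from a turn or segment representation. A hedged sketch follows; the encoding (step counts interleaved with signed turn angles in degrees, positive meaning a right turn) and the function name are our own illustrative choices:

```python
def verbal_directions(segments_and_turns):
    """Render a turn/segment path representation as spoken directions.
    Input: list alternating step counts and turn angles in degrees,
    e.g. [80, 90, 50, -90, 40] for 'right turn after 80 steps, left
    turn after 50 more, destination after 40 more'."""
    phrases = []
    items = list(segments_and_turns)
    while items:
        steps = items.pop(0)               # a straight segment, in steps
        phrases.append(f"walk straight for {steps} steps")
        if items:                          # a turn follows the segment
            angle = items.pop(0)
            side = "right" if angle > 0 else "left"
            phrases.append(f"make a {side} turn")
    phrases.append("you will reach your destination")
    return ", then ".join(phrases)
```

Such a rendering only makes sense if the upstream turn detector is robust, for the reasons discussed above: one missed or spurious turn corrupts the entire verbal description.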
Although it is possible that other sensing modalities (e.g., visual odometry [63] or BLE-based positioning [65]) may achieve higher localization accuracy, inertial odometry has clear practical advantages, as it requires no additional infrastructure, and may function even if the user keeps the smartphone in their pocket while walking. These Examples have shown that, for the same algorithm, the choice of the community of walkers used for training the algorithm’s parameters is important. Systems trained with sighted walkers consistently gave poor results when tested with long cane users and, to a lesser extent, with dog guide users. However, when the training set contained data collected from these communities, results improved substantially, and were in fact comparable to the best results obtained when testing with sighted walkers. Our results also showed that our Turns/Steps PDR produced better results than the more sophisticated RoNIN in the map-less case, even when the latter was fine-tuned to better model walking patterns of blind individuals. For the map-assisted case, the best results were found when the particle filter was fed with data from the Azimuth/Steps algorithm. It should be noted, however, that participants in WeAllWalk kept their smartphones in a fixed location on their garments. Had they changed the orientation of their phone (e.g., to pick up a call, or by repositioning the phone in a different pocket), it is likely that this change in phone orientation would have negatively affected the results. In these situations, an algorithm such as RoNIN, which was trained to correctly identify the user velocity independently of the phone orientation, possibly combined with a mechanism to reduce the effect of drift, could provide more robust position tracking. The choice of the minimum turn angle for a Turns/Steps PDR depends on the specific environment considered.
Although the layout of most buildings results from corridor networks intersecting at 45° or 90°, there may be situations that call for a finer angular interval. This could be achieved by increasing the cardinality of the state tracked by the MKF (Section 2.2). Mitigation procedures similar to those described herein could be used to reduce the detection of false positives resulting from the finer angular resolution. A main technical problem in the art that may be overcome using example systems described herein is that of “orientation drift”, in which the orientation of the walker has an error that increases with time, due to bias in the gyro data. Embodiments of the present invention also correct for biases in path length estimation, enable more accurate pedestrian tracking, and can be customized to particular types of walkers (e.g., blind people walking with the help of a long cane).

Hardware Environment

FIG. 10 is an exemplary hardware and software environment 1000 (referred to as a computer-implemented system and/or computer-implemented method) used to implement one or more embodiments of the invention. The hardware and software environment includes a computer 1002 and may include peripherals. Computer 1002 may be a user/client computer, server computer, or may be a database computer. The computer 1002 comprises a hardware processor 1004A and/or a special purpose hardware processor 1004B (hereinafter alternatively collectively referred to as processor 1004) and a memory 1006, such as random access memory (RAM). The computer 1002 may be coupled to, and/or integrated with, other devices, including input/output (I/O) devices such as a keyboard 1014, a cursor control device 1016 (e.g., a mouse, a pointing device, pen and tablet, touch screen, multi-touch device, etc.) and a printer 1028.
In one or more embodiments, computer 1002 may be coupled to, or may comprise, a portable or media viewing/listening device 1032 (e.g., an MP3 player, IPOD, NOOK, portable digital video player, cellular device, personal digital assistant, etc.). In yet another embodiment, the computer 1002 may comprise a multi-touch device, mobile phone, gaming system, internet enabled television, television set top box, or other internet enabled device executing on various platforms and operating systems. In one embodiment, the computer 1002 operates by the hardware processor 1004A performing instructions defined by the computer program 1010 under control of an operating system 1008. The computer program 1010 and/or the operating system 1008 may be stored in the memory 1006 and may interface with the user and/or other devices to accept input and commands and, based on such input and commands and the instructions defined by the computer program 1010 and operating system 1008, to provide output and results. Output/results may be presented on the display 1022 or provided to another device for presentation or further processing or action. In one embodiment, the display 1022 comprises a liquid crystal display (LCD) having a plurality of separately addressable liquid crystals. Alternatively, the display 1022 may comprise a light emitting diode (LED) display having clusters of red, green and blue diodes driven together to form full-color pixels. Each liquid crystal or pixel of the display 1022 changes to an opaque or translucent state to form a part of the image on the display in response to the data or information generated by the processor 1004 from the application of the instructions of the computer program 1010 and/or operating system 1008 to the input and commands. The image may be provided through a graphical user interface (GUI) module 1018. 
Although the GUI module 1018 is depicted as a separate module, the instructions performing the GUI functions can be resident or distributed in the operating system 1008, the computer program 1010, or implemented with special purpose memory and processors. In one or more embodiments, the display 1022 is integrated with/into the computer 1002 and comprises a multi-touch device having a touch sensing surface (e.g., track pod or touch screen) with the ability to recognize the presence of two or more points of contact with the surface. Examples of multi-touch devices include mobile devices (e.g., IPHONE, NEXUS S, DROID devices, etc.), tablet computers (e.g., IPAD, HP TOUCHPAD, SURFACE Devices, etc.), portable/handheld game/music/video player/console devices (e.g., IPOD TOUCH, MP3 players, NINTENDO SWITCH, PLAYSTATION PORTABLE, etc.), touch tables, and walls (e.g., where an image is projected through acrylic and/or glass, and the image is then backlit with LEDs). Some or all of the operations performed by the computer 1002 according to the computer program 1010 instructions may be implemented in a special purpose processor 1004B. In this embodiment, some or all of the computer program 1010 instructions may be implemented via firmware instructions stored in a read only memory (ROM), a programmable read only memory (PROM) or flash memory within the special purpose processor 1004B or in memory 1006. The special purpose processor 1004B may also be hardwired through circuit design to perform some or all of the operations to implement the present invention. Further, the special purpose processor 1004B may be a hybrid processor, which includes dedicated circuitry for performing a subset of functions, and other circuits for performing more general functions such as responding to computer program 1010 instructions. In one embodiment, the special purpose processor 1004B is an application specific integrated circuit (ASIC) or field programmable gate array. 
The computer 1002 may also implement a compiler 1012 that allows an application or computer program 1010 written in a programming language such as C, C++, Assembly, SQL, PYTHON, PROLOG, MATLAB, RUBY, RAILS, HASKELL, or other language to be translated into processor 1004 readable code. Alternatively, the compiler 1012 may be an interpreter that executes instructions/source code directly, translates source code into an intermediate representation that is executed, or executes stored precompiled code. Such source code may be written in a variety of programming languages such as JAVA, JAVASCRIPT, PERL, BASIC, etc. After completion, the application or computer program 1010 accesses and manipulates data accepted from I/O devices and stored in the memory 1006 of the computer 1002 using the relationships and logic that were generated using the compiler 1012. The computer 1002 also optionally comprises an external communication device such as a modem, satellite link, Ethernet card, or other device for accepting input from, and providing output to, other computers 1002. In one embodiment, instructions implementing the operating system 1008, the computer program 1010, and the compiler 1012 are tangibly embodied in a non-transitory computer-readable medium, e.g., data storage device 1020, which could include one or more fixed or removable data storage devices, such as a zip drive, floppy disc drive 1024, hard drive, CD-ROM drive, tape drive, etc.
Further, the operating system 1008 and the computer program 1010 are comprised of computer program 1010 instructions which, when accessed, read and executed by the computer 1002, cause the computer 1002 to perform the steps necessary to implement and/or use the present invention (e.g., turn detector, step detector, machine learning, neural networks) or to load the program of instructions into a memory 1006, thus creating a special purpose data structure causing the computer 1002 to operate as a specially programmed computer executing the method steps described herein. Computer program 1010 and/or operating instructions may also be tangibly embodied in memory 1006 and/or data communications devices 1030, thereby making a computer program product or article of manufacture according to the invention. As such, the terms “article of manufacture,” “program storage device,” and “computer program product,” as used herein, are intended to encompass a computer program accessible from any computer readable device or media. Of course, those skilled in the art will recognize that any combination of the above components, or any number of different components, peripherals, and other devices, may be used with the computer 1002. FIG. 11 schematically illustrates a typical distributed/cloud-based computer system 1100 using a network 1104 to connect client computers 1102 to server computers 1106. A typical combination of resources may include a network 1104 comprising the Internet, LANs (local area networks), WANs (wide area networks), SNA (systems network architecture) networks, or the like, clients 1102 that are personal computers or workstations (as set forth in FIG. 10), and servers 1106 that are personal computers, workstations, minicomputers, or mainframes (as set forth in FIG. 10). 
However, it may be noted that different networks such as a cellular network (e.g., GSM [global system for mobile communications] or otherwise), a satellite based network, or any other type of network may be used to connect clients 1102 and servers 1106 in accordance with embodiments of the invention. A network 1104 such as the Internet connects clients 1102 to server computers 1106. Network 1104 may utilize ethernet, coaxial cable, wireless communications, radio frequency (RF), etc. to connect and provide the communication between clients 1102 and servers 1106. Further, in a cloud-based computing system, resources (e.g., storage, processors, applications, memory, infrastructure, etc.) in clients 1102 and server computers 1106 may be shared by clients 1102, server computers 1106, and users across one or more networks. Resources may be shared by multiple users and can be dynamically reallocated per demand. In this regard, cloud computing may be referred to as a model for enabling access to a shared pool of configurable computing resources. Clients 1102 may execute a client application or web browser and communicate with server computers 1106 executing web servers 1110. Such a web browser is typically a program such as MICROSOFT INTERNET EXPLORER/EDGE, MOZILLA FIREFOX, OPERA, APPLE SAFARI, GOOGLE CHROME, etc. Further, the software executing on clients 1102 may be downloaded from server computer 1106 to client computers 1102 and installed as a plug-in or ACTIVEX control of a web browser. Accordingly, clients 1102 may utilize ACTIVEX components/component object model (COM) or distributed COM (DCOM) components to provide a user interface on a display of client 1102. The web server 1110 is typically a program such as MICROSOFT’S INTERNET INFORMATION SERVER. Web server 1110 may host an Active Server Page (ASP) or Internet Server Application Programming Interface (ISAPI) application 1112, which may be executing scripts. 
The scripts invoke objects that execute business logic (referred to as business objects). The business objects then manipulate data in database 1116 through a database management system (DBMS) 1114. Alternatively, database 1116 may be part of, or connected directly to, client 1102 instead of communicating/obtaining the information from database 1116 across network 1104. When a developer encapsulates the business functionality into objects, the system may be referred to as a component object model (COM) system. Accordingly, the scripts executing on web server 1110 (and/or application 1112) invoke COM objects that implement the business logic. Further, server 1106 may utilize MICROSOFT’S TRANSACTION SERVER (MTS) to access required data stored in database 1116 via an interface such as ADO (Active Data Objects), OLE DB (Object Linking and Embedding DataBase), or ODBC (Open DataBase Connectivity). Generally, these components 1100-1116 all comprise logic and/or data that is embodied in, and/or retrievable from, a device, medium, signal, or carrier, e.g., a data storage device, a data communications device, a remote computer or device coupled to the computer via a network or via another data communications device, etc. Moreover, this logic and/or data, when read, executed, and/or interpreted, results in the steps necessary to implement and/or use the present invention being performed. Although the terms “user computer”, “client computer”, and/or “server computer” are referred to herein, it is understood that such computers 1102 and 1106 may be interchangeable and may further include thin client devices with limited or full processing capabilities, portable devices such as cell phones, notebook computers, pocket computers, multi-touch devices, and/or any other devices with suitable processing, communication, and input/output capability.
Of course, those skilled in the art will recognize that any combination of the above components, or any number of different components, peripherals, and other devices, may be used with computers 1102 and 1106. Embodiments of the invention are implemented as a software/application on a client 1102 or server computer 1106. Further, as described above, the client 1102 or server computer 1106 may comprise a thin client device or a portable device that has a multi-touch-based display. In one or more examples, the one or more processors, memories, and/or computer executable instructions are specially designed, configured or programmed for performing machine learning. The computer program instructions may include a detection/pattern matching component for pattern recognition or detection (e.g., straight section recognition/detection, turn detection, trajectory identification) or applying a machine learning model (e.g., for analyzing data or training data input from a data store to perform machine learning algorithms described herein). In one or more examples, the processors may comprise a logical circuit for performing pattern matching or recognition, or for applying a machine learning model for analyzing data or training data input from a memory/data store or other device. Data store/memory may include a database, e.g., storing various training sets. In some examples, the pattern matching model applied by the pattern matching logical circuit may be a machine learning model, such as a convolutional neural network, a logistic regression, a decision tree, a recurrent neural network, or other machine learning model. In one or more examples, the logical circuit comprises a straight section circuit, a turn detector circuit, and/or a trajectory reconstruction logical circuit. Process Steps Method for tracking Fig. 12 is a flowchart illustrating a computer implemented method for determining a user’s trajectory.
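At a high level, the flowchart's tracking method chains step displacement vectors whose directions come from a quantized orientation estimate. The sketch below is purely illustrative: the threshold-based heel-strike detector and the 45° orientation quantizer stand in for the trained machine learning modules and MKF orientation tracker described herein, and the function name, acceleration threshold, and default stride length are hypothetical.

```python
import numpy as np

def reconstruct_trajectory(azimuth, accel, stride_length=0.7):
    """Illustrative sketch of the tracking pipeline: detect steps and
    quantized headings from inertial samples, then chain displacement
    vectors into a 2D trajectory. Simple thresholds replace the trained
    step detector and MKF orientation tracker of the actual system."""
    # Step detection stand-in: an upward crossing of a (hypothetical)
    # acceleration threshold is treated as a heel strike.
    steps = [t for t in range(1, len(accel))
             if accel[t] > 1.5 and accel[t - 1] <= 1.5]
    # Orientation stand-in: snap the azimuth to the nearest multiple of
    # 45 degrees (N = 8 fixed orientations), then advance one stride.
    path = [(0.0, 0.0)]
    for t in steps:
        theta = np.deg2rad(45 * round(azimuth[t] / 45))
        x, y = path[-1]
        path.append((x + stride_length * np.cos(theta),
                     y + stride_length * np.sin(theta)))
    return path
```

A turn would then appear in this representation as a change in the quantized heading between consecutive straight runs of the path, mirroring the turn detector described in the examples.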
Block 1200 represents receiving input data, comprising acceleration in 3 orthogonal directions, orientation data (including azimuth angle), and rotation rate in the three orthogonal directions, from one or more inertial sensors carried by a user taking steps comprising heel strikes along a trajectory. Block 1202 represents transforming the input data. In one or more examples, the step comprises transforming coordinates of the input data into a heading agnostic reference frame, to obtain heading agnostic data. Block 1204 represents identifying turns. The step comprises analyzing the input data using a first trained machine learning algorithm so as to detect a plurality of n straight sections when the user is walking along an approximately straight path; implementing an orientation tracker tracking an orientation of the user in each of the n straight sections, wherein the orientation comprises an estimated orientation taking into account drift of the input data outputted from the one or more sensors; and implementing a turn detector detecting each of the turns, wherein each of the turns is a change in the estimated orientation of the user in the nth straight section as compared to the estimated orientation in the (n-1)th straight section. Block 1206 represents using a second trained machine learning algorithm to detect the steps by associating impulses in the heading agnostic data with the heel strikes. Block 1208 represents counting a number of the detected steps by associating the steps with a stride length, so as to output step data. Block 1210 represents optionally associating the step data and turns with a map. Block 1212 represents optionally performing particle filtering. Block 1214 represents determining the trajectory using the step data and the turns. Method of making and training Fig.
13 illustrates a method of training machine learning algorithms to count steps, detect turns, and construct a path or trajectory so as to obtain a computer implemented system useful for determining a user’s trajectory. Block 1300 represents obtaining one or more processors; one or more memories; and one or more computer executable instructions stored in the one or more memories, wherein the computer executable instructions are executed by the one or more processors. The computer executable instructions implement the machine learning algorithms as described herein. Block 1302 illustrates receiving or creating, in the computer, training data comprising ground truth data comprising ground truth step data (e.g., number of steps) and ground truth turn data (e.g., number and location of turns). In one or more examples, the training data comprises WeAllWalk data. Blocks 1304A-1304B represent training: (1) the first machine learning algorithm (Block 1304A), using the ground truth turn data, to detect the turns using the acceleration and orientation data. In one or more examples, the training comprises using one or more error metrics (e.g., the Levenshtein metric) to compare the turn data determined from the first machine learning algorithm, with the ground truth turn data. In one or more examples, the first machine learning algorithm is trained to detect the 90 degree turns by not including the turns comprising 45 degree turns or 90 degree turns associated with a 45 degree turn. In yet further examples, training the first machine learning algorithm to detect 45 degree turns comprises converting 90 degree turns into two consecutive 45 degree turns. (2) the second machine learning algorithm (Block 1304B), using the ground truth step data, to count the steps using the acceleration and rotation rate data obtained from the inertial sensors (e.g., on a smart phone).
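The turn error metric named above appears to be an edit distance (Levenshtein distance) between the sequence of turns detected by the algorithm and the ground truth turn sequence. A minimal illustrative sketch follows; the encoding of turns as signed angle tokens (e.g., +90, -90, +45) is an assumption for illustration, not a detail taken from the specification.

```python
def levenshtein(detected, ground_truth):
    """Edit distance between two turn sequences (e.g., [90, -90, 45]).
    Dynamic programming over a (m+1) x (n+1) table; deletions map to
    undercounted turns and insertions to overcounted turns."""
    m, n = len(detected), len(ground_truth)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i            # delete all remaining detected turns
    for j in range(n + 1):
        d[0][j] = j            # insert all remaining ground-truth turns
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if detected[i - 1] == ground_truth[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]
```

Under this encoding, the overcount and undercount rates discussed in the examples can be read off the insertion and deletion counts, consistent with training that weights the undercount rate more heavily.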
In one or more examples, the training comprises using one or more error metrics to compare the step data determined from the second machine learning algorithm, with the ground truth step data. Block 1306 represents optionally training a third machine learning algorithm, using ground truth training data comprising waypoint time stamps, to track the trajectory of the user using the step data and the turn data determined from the acceleration and rotation rate data (e.g., outputted from inertial sensors on a smartphone). In one or more examples, the training comprises using an error metric to compare a degree of alignment of the trajectory determined by the third machine learning algorithm with a ground truth path passing through the waypoint time stamps. In one or more examples, the training data comprises WeAllWalk data. Block 1308 represents a computer implemented system implementing trained machine learning algorithms or artificial intelligence. In one or more examples, the step further comprises coupling the system to a navigation system and a smartphone. Illustrative examples of the system include, but are not limited to, the following (referring to Figs. 1-15). 1. Fig.
3, 8, 9, 10, and 15 illustrate an example computer implemented system 1000 useful for determining turns in a user’s trajectory 800, comprising: one or more processors 1004; one or more memories 1006; and one or more programs stored in the one or more memories, wherein the one or more programs 1010 executed by the one or more processors: receive input data, comprising orientation data and acceleration from one or more sensors carried by a user 306 taking steps along a trajectory; analyze the input data 204 using a first trained machine learning algorithm 1500 or straight walking detector 202 so as to detect a plurality of n straight sections 802 when the user is walking along an approximately straight path; comprise an orientation tracker 200 tracking an orientation of the user in each of the n straight sections, wherein the orientation comprises an estimated orientation taking into account drift of the input data outputted from the one or more sensors; and comprise a turn detector 1500, 850 detecting each of the turns 804, wherein each of the turns is a change in the estimated orientation of the user in the nth straight section 802a as compared to the estimated orientation in the (n-1)th straight section 802b (e.g., the orientation is an azimuth angle θ, or angle in degrees, between the nth straight section and the (n-1)th straight section, where n is an integer). 2. The system of example 1, wherein the one or more programs output a trajectory of the user using the one or more turns detected by the turn detector. 3. The system of examples 1 or 2, wherein: the first trained machine learning algorithm comprises a recurrent neural network 1500 trained to identify, from the acceleration and the orientation data comprising azimuthal angle, one or more time intervals during which the user walks regularly on a straight path. 4. Fig.
3 illustrates an example of the system of any of the examples 1-3, wherein the first trained machine learning algorithm 1500 is trained to disregard changes in the orientation resulting from the user comprising a visually impaired user stopping and rotating their body for re-orientation purposes, swerving to avoid a perceived obstacle, or mistakenly veering off a straight path. 5. The system of any of the examples 1-4, wherein the orientation tracker: assigns the orientation data comprising an azimuthal angle θ(t) at each time t to one of N fixed orientations, to form a sequence of orientations; models the sequence of the orientations O_k as a Markov chain; models a drift d(t) in the θ(t) as a random walk with additive Gaussian white noise; uses a Mixture Kalman Filter (MKF) algorithm to compute a posterior distribution of discrete orientations and of drift at each time, conditioned on the sequence of measurements of the azimuth angle before that time, with the measurements modeled by a normal distribution; a prior probability represents the probability that the user’s orientation does not change in two consecutive time instants; wherein the MKF algorithm forms the posterior distribution comprising a list of a plurality of the sequences of the orientations, a Kalman filter being associated with each of the sequences; and the MKF algorithm adds, at each time instant, a new orientation to the sequence by sampling from the posterior distribution; and the one or more programs assign the estimated orientation at time t as the maximizer of the posterior distribution. 6. The system of example 5, wherein: the parameters are learned from training data by finding a minimum of a weighted sum of an overcount rate and an undercount rate of the turns, giving larger weight to the undercount rate. 7. The system of any of the examples 5-6, wherein N = 8 or 4 (orientations at multiples of 45° or 90°, respectively) and a turn by 90° is tracked as two consecutive turns by 45°. 8. Fig.
3 illustrates an example of the system of any of the examples 1-7, wherein portions at the beginning and end of each straight section 802 when the user 306 is known to be standing still are labeled, during training of the first machine learning algorithm, as non-straight walking sections 304. 9. The system of any of the examples 1-8, wherein the first machine learning algorithm 1500 identifies each of the straight walking sections 302 after sampling the orientation data for no more than 1 second. 10. The system of any of the examples 1-9, wherein the orientation tracker 200 mitigates for tracking delays causing detection of the turns 804 during the straight sections 802, by comparing the mode of the orientation in the nth straight section 802a with the mode of the orientation in the (n-1)th straight section 802b. 11. Fig. 1A and Fig. 8 further illustrate an example of the system of any of the examples 1-10, wherein: the input data comprises the acceleration in three orthogonal directions (e.g., x, y and z cartesian directions) and rotation rate (e.g., about the x, y, or z axis) in the three orthogonal directions, and the steps comprise heel strikes along the trajectory, and one or more of the programs: transform coordinates of the input data 204 into a heading agnostic reference frame, to obtain heading agnostic data; use a second trained machine learning algorithm 1500 to detect the steps 806 by associating impulses 100 in the heading agnostic data with the heel strikes; count a number of the detected steps 806 by associating the steps with a stride length 808, so as to output step data 102; and determine the trajectory using steps detected using the second trained machine learning algorithm and the turns detected using the turn detector. 12.
The system of example 11, wherein the trajectory comprises: one or more step displacement vectors 810 defined at each of the detected steps as having a length equal to the stride length and a direction comprising an azimuthal angle obtained from the input data; and one or more turn displacement vectors 812 defined as having a length equal to the stride length and with the direction determined from the turn data. 13. Fig. 8 illustrates an example of the system of example 12, wherein the one or more programs determine or track the trajectory by comparing and aligning the step data and the turns with trajectory data 814 outputted from another machine learning algorithm (e.g., Ronin neural network) analyzing or determining or tracking the user’s trajectory. 14. The system of any of the examples 1-13, wherein the trajectory is determined without reference to a map of the environment in which the user is moving. 15. Fig. 5 illustrates an example of the system of any of the examples 1-13, wherein the one or more programs: receive a map 500 of an environment in which the user is moving, the map identifying one or more impenetrable walls 502, and determine the trajectory by comparing the trajectory to the map and eliminating one or more paths 504 from a trajectory that traverses the impenetrable walls. 16. The system of example 15, wherein the one or more programs: receive or obtain velocity vectors of the user from the input data or another source; apply a particle filtering algorithm to the velocity vectors to output posterior locations of the user; and apply a mean shift algorithm to the posterior locations to estimate locations of the user as the highest modes of the posterior locations. 17. The system of example 16, wherein the trajectory is determined by linking pairs of the estimated locations that share the largest number of modes. 18. 
The system of any of the examples 1-17, wherein the machine learning algorithms are trained using training data comprising the acceleration, orientation data, and the rotation rate of the users comprising blind or visually impaired persons using walking aids (e.g., a cane or a dog). 19. The system of any of the examples 1-18, further comprising: the first trained machine learning algorithm 1500 trained: using the ground truth data comprising ground truth turn data and one or more first error metrics comparing the turns determined from the first trained machine learning algorithm with the ground truth turn data; to detect the turns 804 comprising 90 degree turns using training data not including the turns 804 comprising 45 degree turns or 90 degree turns associated with a 45 degree turn; and to detect the turns 804 comprising 45 degree turns using the training data wherein 90 degree turns are converted into the turns comprising two consecutive 45 degree turns; and the second trained machine learning algorithm 1500 trained using ground truth data comprising ground truth step data and one or more second error metrics comparing the step data determined from the second trained machine learning algorithm with the ground truth step data. 20. The system of example 19, wherein the first error metric for the turns utilizes the Levenshtein distance. 21. The system of any of the examples 1-20, further comprising the one or more programs determining the trajectory using a third trained machine learning algorithm 1500 trained using ground truth data comprising waypoint time stamps and one or more error metrics, the one or more error metrics comparing a degree of alignment of the trajectory with a ground truth path passing through the waypoint time stamps. 22. The system of any of the examples 1-21, wherein the machine learning algorithms 1500 are trained using WeAllWalk data. 23.
The system of any of the examples 1-22, wherein the second trained machine learning algorithm 1500 utilizes an LSTM (Long Short Term Memory) neural network comprising no more than 2 layers and a hidden unit size of no more than 6. 24. The system of any of the examples 1-23, wherein the first trained machine learning network utilizes a GRU (Gated Recurrent Unit) neural network. 25. The system of any of the examples 1-24, wherein the machine learning is trained for the user walking indoors. 27. A navigation system comprising the system of any of the examples 1-26. 28. Fig. 8 and Fig. 10 illustrate a navigation system 1000 determining turns in a user’s trajectory, comprising: a smartphone 1032 comprising a display 1022 and one or more sensors 1030 and at least comprising or coupled to one or more processors 1004A, 1004B; one or more memories 1006; and one or more programs 1010 stored in the one or more memories, wherein the one or more programs executed by the one or more processors: receive input data X, 204, comprising orientation data and acceleration from the one or more sensors carried by a user taking steps along a trajectory 800; detect a plurality of n straight sections 802 in the trajectory, where n is an integer, each of the straight sections corresponding to the user 306 walking along a substantially straight or linear path; generate and track an orientation (azimuth θ) of the user in each of the n straight sections 802, wherein the orientation comprises an estimated orientation taking into account drift of the input data 204 outputted from the one or more sensors; detect one or more turns 804 in the trajectory, wherein each of the turns 804 is a change in the estimated orientation of the user in the nth one of the straight sections 802a as compared to the estimated orientation in the (n-1)th one of the straight sections 802b; and output, to the display 1022, a graphical representation 801 of the trajectory 800 generated using the one or more turns
804. 29. Fig. 3 and Fig. 11 illustrate an example of the system of example 28, wherein detecting the straight sections 802 further comprises the one or more programs: storing the input data in a database 1116; transforming the input data into trajectory detection data (e.g., vectors, feature vectors X) processable by a first machine learning module 1500; classifying the trajectory detection data as representing motion in one of the straight sections 802, 302 (SW) or in a non-straight section (non-SW) 304 using the first machine learning module 1500; and at least labelling one or more values of the trajectory detection data X in a database 1116 as being associated with one of the straight sections 802, or indicating the one of the straight sections on the graphical representation 801 on the display 1022, if the one or more values are classified by the first machine learning module 1500 as being associated with the one of the straight sections 802. 30. The system of example 29, wherein the first machine learning module 1500 is trained using training data (input data X) comprising at least WeAllWalk data, or the acceleration and the orientation of pedestrians 306 comprising blind or visually impaired persons walking using a walking aid. 31. The system of example 29 or 30, wherein the first machine learning module comprises a GRU neural network. 32.
The system of example 29 or 30, wherein: the classifying identifies or detects the plurality of straight sections 802 based on whether the vectors lie on or are mapped to a first side or a second side of a hyperplane in a coordinate space of the vectors, the hyperplane determined using training reference data in a training database, the reference data comprising reference vectors associated with acceleration and orientation angle labeled during training of the first machine learning module as being associated with a plurality of stored straight trajectories on the first side or non-straight trajectories on the second side; and the one or more programs label the one or more values of the input data in the database or indicate the one of the straight sections on the graphical representation on the display if the one or more of the vectors associated with the one or more values or straight sections are mapped by the first machine learning module to the first side of the hyperplane. 33. Fig. 15 and Fig. 3 illustrate an example of the system of any of the examples 29-32, wherein the first machine learning module comprises a recurrent neural network 1500 trained to identify, from the acceleration and the orientation data 204 comprising an azimuthal angle, each of the straight sections 802 comprising one or more time intervals during which the user 306 walks regularly or substantially on a straight path. 34. Fig. 3 and Fig. 15 illustrate an example of the system of any of the examples 29-33, wherein the first machine learning module 1500 is trained to disregard changes in the orientation θ resulting from the user 306 comprising a visually impaired user stopping and rotating their body to re-orient or swerving to avoid a perceived obstacle. 35. The system of any of the examples 28-34, further comprising detecting each of the straight sections 802 after the one or more programs sample the orientation data for no more than 1 second. 36.
The system of any of the examples 28-35, wherein the turn 804 by an angle of 90° is tracked as two consecutive turns by 45°. 37. Fig. 1A and 1B illustrate an example of the system of any of the examples 28-36, wherein the one or more programs: transform coordinates of the input data into a heading agnostic reference frame, to obtain heading agnostic data X; detect the steps 806 as detected steps by associating impulses 100 in the heading agnostic data with the heel strikes (of the user’s 306 heel 308 on the ground) using a second machine learning module 1500; count a number of the detected steps 806 by associating the steps with a stride length 808, so as to output step data 102; and determine the trajectory 800 using the detected steps 806 and the turns 804; and the trajectory comprises: one or more step displacement vectors 810 defined at each of the detected steps as having a length equal to the stride length and a direction comprising an azimuthal angle θ obtained from the input data; and one or more turn displacement vectors 812 defined as having a length equal to the stride length and with the direction determined from the turns 804 detected by the turn detector. 38. The system of example 37, further comprising the one or more programs: transforming the heading agnostic data into trajectory detection data X processable by a second machine learning module 1500; and at least classifying or recognizing one or more values of the trajectory detection data as being associated with steps 806 using the second machine learning module, or counting the steps 806 using the second machine learning module. 39. The system of example 37 or 38, wherein the second machine learning module 1500 is trained using reference trajectory data outputted from another machine learning module identifying the user’s trajectory. 40.
The system of any of the examples 37-39, wherein the second machine learning module comprises an LSTM neural network comprising no more than 2 layers 1502 and a hidden unit size of no more than 6. 41. The system of any of the examples 1-40, wherein the trajectory 800 is determined without reference to a map 500 of an environment in which the user is moving. 42. The system of any of examples 1-41, wherein the one or more programs: receive a map 500 of an environment in which the user is moving, the map identifying one or more impenetrable walls 502; and determine the trajectory 800 by comparing the trajectory to the map and eliminating one or more paths 504 in the trajectory that traverse the impenetrable walls. 43. The system of example 42, wherein the one or more programs: receive or obtain velocity vectors of the user 306 from the input data or another source; generate posterior locations of the user from the velocity vectors using a particle filtering module; estimate, using a mean shift, locations of the user corresponding to the highest modes of the posterior locations; and generate the trajectory by linking pairs of the estimated locations that share the largest number of the modes. 44. Fig.
12 illustrates an example method for determining turns 804 and/or straight sections 802 in a user’s 306 trajectory, comprising: capturing input data 1200, 204 comprising orientation data and acceleration from one or more sensors carried by a user taking steps along a trajectory; detecting 1204 a plurality of n straight sections 802 in the trajectory 800, each of the straight sections corresponding to the user 306 walking along a substantially straight path or a linear path; generating and tracking an orientation of the user 306 in each of the n straight sections 802, wherein the orientation comprises an estimated orientation taking into account drift of the input data outputted from the one or more sensors; and detecting 1204 one or more turns 804 in the trajectory, wherein each of the turns 804 is a change in the estimated orientation of the user in the nth straight section 802a as compared to the estimated orientation in the (n-1)th straight section 802b. 45. The method of example 44, further comprising: storing the input data 200 in a database 1116 on a computer 1000; transforming 1202 the input data into trajectory detection data X processable by a first machine learning module 1500; classifying 1204 the trajectory detection data as representing motion in one of the straight sections (SW) 302 or in a non-straight section (non-SW) 304 using the first machine learning module 1500; and at least labelling 1214 one or more values of the trajectory detection data in a database as being associated with one of the straight sections 802 or indicating the one of the straight sections 802 on the graphical representation on the display 1022 if the one or more values are classified by the first machine learning module 1500 as being associated with the one of the straight sections 802. 46. Fig.
12 and 13 illustrate an example of the method of example 45, further comprising training the first machine learning module 1500 for detecting at least one of the straight sections 802 or turns in a trajectory 800, comprising: collecting 1200 a set of first pedestrian data, the first pedestrian data comprising orientation data and acceleration data for a walking pedestrian 306; applying 1202 one or more transformations to the first pedestrian data, including smoothing, to create a modified set of pedestrian data; creating 1302 a first training set from the modified set, comprising labelled straight walking sections and labelled non-straight walking sections; and training 1304A the first machine learning module in a first stage using the first training set to identify the straight sections in the trajectory data using the orientation data and the acceleration data, the straight sections each corresponding to the user walking along the linear path. 47. The method of example 46, wherein the first training set comprises the modified set comprising data for the pedestrians 306 comprising blind or visually impaired persons walking using a walking aid comprising at least one of a cane or a guide dog. 48. The method of example 47, wherein the first pedestrian data comprises a WeAllWalk data set. 49. The method of any of the examples 46-48, wherein: the applying of the transformations comprises removing, from the first pedestrian data, data associated with the turn 804 comprising a 45 degree turn or 90 degree turns associated with a 45 degree turn; and the training comprises training the first machine learning module 1500 in the first stage, or another machine learning module, to detect, identify, or classify the turns 804 in the trajectory comprising 90 degree turns. 50.
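The smoothing transformation applied to the pedestrian data in step 1202 above could be as simple as a centered moving average. The sketch below is illustrative only — the patent does not specify the smoothing method, and the window size is an assumption:

```python
import numpy as np

def smooth(signal, window=5):
    """Centered moving average: one simple transformation that could be
    applied to raw orientation/acceleration data before building the
    labelled training set."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

# A one-sample spike in the azimuth signal is spread out by the filter,
# so a brief sensor glitch no longer looks like a turn.
azimuth = np.array([0.0, 0.0, 10.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
smoothed = smooth(azimuth)
```

The spike's energy is preserved (the smoothed samples still sum to 10) but its peak drops from 10 to 2, which is the usual reason for smoothing IMU signals before labelling straight and non-straight windows.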
The method of any of the examples 46-49, wherein: the applying of the transformations comprises converting the first pedestrian data corresponding to the turns 804 comprising 90 degree turns into two consecutive 45 degree turns; and the training comprises training the first machine learning module, or another machine learning module, to detect, identify, or classify the turns 804 comprising the 45 degree turns. 51. The method of any of the examples 46-50, further comprising: creating a second training set 1302 from the modified set, the second training set comprising orientation turns 804 between adjacent ones of the straight sections 802; and training 1304A the first machine learning module in a second stage, or another machine learning module, to detect, classify, or identify the turns in the trajectory using the second training set. 52. The method of example 51, further comprising: creating a third training set 1302 comprising ground truth turns obtained from a third database comprising ground truth data associated with the trajectory; and training 1304A the first machine learning module in a third stage, or another machine learning module 1500, using the ground truth turns, to detect, identify, or classify the turns in the trajectory using the third training set. 53. Fig. 12 and Fig.
13 further illustrate a method comprising, or the method of any of the examples 46-52 further comprising, training a second machine learning module 1500 for detecting steps in the trajectory 800, comprising: collecting 1200 a set of second pedestrian data comprising the acceleration of the pedestrian in three orthogonal directions and a rotation rate of the smartphone in the three orthogonal directions, wherein the acceleration is associated with steps taken by the pedestrian along the trajectory; applying 1202 one or more second transformations to the second pedestrian data, comprising transforming coordinates of the second pedestrian data into a heading agnostic reference frame, to obtain heading agnostic data; creating 1302 a second training set from the heading agnostic data, wherein steps are associated with: impulses 100 identified using the acceleration data, the impulses corresponding to heel strikes 308 of the walking pedestrian 306, in the heading agnostic data; and a stride length 808 of the pedestrian 306; and training 1304B the second machine learning module 1500 using the second training set to at least identify or count a number of the steps in the trajectory by associating the steps with the impulses. 54. The method of example 53, further comprising: creating a third training set 1302 comprising ground truth step data; and training the second machine learning module 1500 in a second stage to detect the steps 806 using the third training set. 55. Fig. 12 and Fig. 
13 illustrate a method comprising, or the method of any of the examples 46-54 further comprising: training a third machine learning module 1500 for mapping a trajectory, comprising: collecting 1200 a set of third pedestrian data comprising the trajectory mapped using the first machine learning module 1500 and the second machine learning module 1500; creating 1302 a third training set comprising ground truth data comprising waypoint time stamps; and training 1306 the third machine learning module to determine or map the trajectory by comparing a degree of alignment of the trajectory (mapped using the first machine learning module and the second machine learning module) with a ground truth path passing through the waypoint time stamps. 56. A computer implemented system 1000 of any of the preceding examples, comprising components stored in the memory and executed by the processor, the components comprising: a map annotator (e.g., highlighted trajectory 801) that annotates the map and provides the instructions to a display 1022 component or speaker component of the mobile device to create or generate the highlighting or indicating of the trajectory; and a navigation component that extracts/receives data used by the map annotator to indicate the trajectory. 57. The computer implemented system or method of any of the preceding examples, comprising activating or utilizing a map updated with the trajectory in real-time to provide navigation instructions in a real-world environment. 58. A navigation system or application or mapping system or odometer or application comprising the system of any of the examples 1-57. 59. The system or method of any of the examples, wherein ground truth data is data that is known to be real or true, provided by direct observation and measurement (i.e., empirical evidence) as opposed to information provided by inference. 60. Fig.
15 illustrates the system or method of any of the examples wherein the neural network 1500 (A) receives input data (e.g., training data comprising acceleration, azimuth angle from smartphone sensor) transformed into a format processable by the network via inputs X; (B) processes the input data using initialized variables (weights and biases and activation functions) in hidden layers; (C) outputs a predicted result (straight walking section, turn, step, trajectory); (D) compares the predicted result to an expected value (e.g., from training data) to produce an error; (E) propagates the error back through the same path and adjusts the variables (weights and biases) according to/in response to the error (e.g., so as to reduce the magnitude of the error); (F) repeats steps (A)-(E) until the error is within an acceptable range; and (G) outputs one or more outputs comprising a prediction (straight walking section, turn, step, trajectory) made by applying these variables to new, unseen input data. Thus, in one or more embodiments, steps (A)-(E) can be repeated using the input data comprising training data and then used on new input data once the error is below a threshold value. In one or more examples, training sets are used to determine the weights, biases, and/or activation functions used to process the new input data using the neural network. In various examples, the methods and systems described herein are integrated into a practical application (e.g., computer implemented mapping system or navigation system) and improve functioning of the mapping system, navigation system, and/or computers implementing the mapping or navigation system. Embodiments of the systems described herein use the inertial sensors (accelerometer, gyro) of a regular smartphone to track the location of a pedestrian. Some embodiments may be particularly useful in indoor environments, where GPS cannot be used.
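A recurrent module of the kind described above, sized to the compact budget stated in the examples (no more than 2 LSTM layers and a hidden unit size of no more than 6), might be sketched in PyTorch as follows. The input feature count (6: 3-axis acceleration plus 3-axis rotation rate) and the per-sample output head are assumptions for illustration:

```python
import torch
import torch.nn as nn

class StepDetector(nn.Module):
    """Compact LSTM within the stated budget: 2 layers, hidden size 6.
    Input: per-sample IMU features (assumed 6: 3-axis accel + 3-axis gyro).
    Output: per-sample step/no-step logit."""
    def __init__(self, n_features=6, hidden=6, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):          # x: (batch, time, n_features)
        out, _ = self.lstm(x)      # out: (batch, time, hidden)
        return self.head(out)      # logits: (batch, time, 1)

model = StepDetector()
logits = model(torch.randn(1, 100, 6))   # one 100-sample IMU window
```

A network this small has only a few hundred parameters, which is what makes it practical to run continuously on a smartphone alongside the rest of the pipeline.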
In one or more examples, a system is designed to track the path of a pedestrian in two situations: when a map of the building is available and when it is not. The data and results presented herein were obtained using software developed using Xcode for iOS. The following iOS Frameworks were used: Foundation, AVFoundation, UIKit, CoreMotion, CoreML, GameplayKit, simd.
- Charts https://github.com/danielgindi/Charts
- Firebase https://firebase.google.com
- RoNIN https://github.com/Sachini/ronin
- Python packages: numpy, pyproj, sklearn, pandas, tensorflow, h5py, quaternion, pytorch, onnx, scipy, keras, pathlib, matplotlib, pprint, tqdm, pylab, textwrap, pygame
Example Recurrent Neural Network
Fig. 15 illustrates an example recurrent neural network implementing machine learning modules according to embodiments described herein (e.g., examples 1-58 described above). The neural network 1500 (A) receives input data (e.g., acceleration, azimuth angle, orientation angles) transformed into a format processable by the network via inputs X; (B) processes the input data using initialized variables (weights and biases and activation functions) in hidden layers 1502; (C) outputs a predicted result; (D) compares the predicted result to an expected value to produce an error E; (E) propagates the error back through the same path and adjusts the variables (weights and biases) according to/in response to the error (e.g., so as to reduce the magnitude of the error); (F) repeats steps (A)-(E) until the error is within an acceptable range; and (G) outputs one or more outputs 1504 comprising a prediction made by applying these variables to new, unseen input data. Thus, in one or more embodiments, steps (A)-(E) can be repeated using the input data comprising training data (e.g., training sets as described in examples 46-55) and then used on new input data once the error E is below a threshold value.
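Steps (A)-(G) describe the standard supervised training loop. A minimal PyTorch sketch follows; the data, labels, and tiny feed-forward network are placeholders, not the patent's architecture:

```python
import torch
import torch.nn as nn

# (A) input data X and expected labels y (placeholders for the training sets).
X = torch.randn(64, 50, 6)            # 64 windows, 50 samples, 6 IMU channels
y = torch.randint(0, 2, (64, 1)).float()

net = nn.Sequential(nn.Flatten(), nn.Linear(50 * 6, 16),
                    nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(100):              # (F) repeat until the error is acceptable
    pred = net(X)                     # (B)+(C) forward pass through hidden layers
    error = loss_fn(pred, y)          # (D) compare prediction to expected value
    opt.zero_grad()
    error.backward()                  # (E) back-propagate the error and ...
    opt.step()                        # ... adjust the weights and biases
    if error.item() < 0.05:
        break

# (G) apply the trained variables to new, unseen input data.
new_pred = torch.sigmoid(net(torch.randn(1, 50, 6)))
```

In practice the loop would iterate over mini-batches of the training sets described in examples 46-55, with a held-out set deciding when the error is "within an acceptable range".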
In one or more examples, training sets are used to determine the weights, biases, and/or activation functions used to process the new input data using the neural network 1500. In one or more examples, the error E is determined using error metrics (e.g., the Levenshtein metric, comparison to ground truth data) or comparison to the error metrics, as described herein.
References
The following references are incorporated by reference herein: 1. Guerreiro, J.; Ahmetovic, D.; Sato, D.; Kitani, K.; Asakawa, C. Airport Accessibility and Navigation Assistance for People with Visual Impairments. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019; pp. 1–14. 2. Murata, M.; Ahmetovic, D.; Sato, D.; Takagi, H.; Kitani, K.M.; Asakawa, C. Smartphone-Based Indoor Localization for Blind Navigation across Building Complexes. In Proceedings of the 2018 IEEE International Conference on Pervasive Computing and Communications (PerCom), Athens, Greece, 19–23 March 2018; pp. 1–10. 3. Spachos, P.; Plataniotis, K.N. BLE Beacons for Indoor Positioning at an Interactive IoT-Based Smart Museum. IEEE Syst. J. 2020, 14, 3483–3493. 4. Wang, S.-S. A BLE-Based Pedestrian Navigation System for Car Searching in Indoor Parking Garages. Sensors 2018, 18, 1442. 5. Kriz, P.; Maly, F.; Kozel, T. Improving Indoor Localization Using Bluetooth Low Energy Beacons. Mob. Inf. Syst. 2016, 2016. 6. He, S.; Chan, S.-H.G. Wi-Fi Fingerprint-Based Indoor Positioning: Recent Advances and Comparisons. IEEE Commun. Surv. Tutor. 2015, 18, 466–490. 7. Scaramuzza, D.; Fraundorfer, F. Visual Odometry [Tutorial]. IEEE Robot. Autom. Mag. 2011, 18, 80–92. 8. Zhang, R.; Yang, H.; Höflinger, F.; Reindl, L.M. Adaptive Zero Velocity Update Based on Velocity Classification for Pedestrian Tracking. IEEE Sens. J. 2017, 17, 2137–2145. 9. Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; Massachusetts Institute of Technology: Cambridge, MA, USA, 2005. 10.
Yan, H.; Herath, S.; Furukawa, Y. RoNIN: Robust Neural Inertial Navigation in the Wild: Benchmark, Evaluations, and New Methods. arXiv 2019, arXiv:1905.12853. 11. Yan, H.; Shan, Q.; Furukawa, Y. RIDI: Robust IMU Double Integration. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 621–636. 12. Chen, C.; Zhao, P.; Lu, C.X.; Wang, W.; Markham, A.; Trigoni, N. OxIOD: The Dataset for Deep Inertial Odometry. arXiv 2018, arXiv:1809.07491. 13. Hallemans, A.; Ortibus, E.; Meire, F.; Aerts, P. Low Vision Affects Dynamic Stability of Gait. Gait Posture 2010, 32, 547–551. 14. Iosa, M.; Fusco, A.; Morone, G.; Paolucci, S. Effects of Visual Deprivation on Gait Dynamic Stability. Sci. World J. 2012, 2012. 15. Tomomitsu, M.S.; Alonso, A.C.; Morimoto, E.; Bobbio, T.G.; Greve, J. Static and Dynamic Postural Control in Low-Vision and Normal-Vision Adults. Clinics 2013, 68, 517–521. 16. Flores, G.H.; Manduchi, R. WeAllWalk: An Annotated Dataset of Inertial Sensor Time Series from Blind Walkers. ACM Trans. Access. Comput. (TACCESS) 2018, 11, 1–28. 17. Jacobson, W. Orientation and Mobility. In Assistive Technology for Blindness and Low Vision; CRC Press: Boca Raton, FL, USA, 2012. 18. Flores, G.; Manduchi, R. Easy Return: An App for Indoor Backtracking Assistance. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–12. 19. Yoon, C.; Louie, R.; Ryan, J.; Vu, M.; Bang, H.; Derksen, W.; Ruvolo, P. Leveraging Augmented Reality to Create Apps for People with Visual Disabilities: A Case Study in Indoor Navigation. In Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility, Pittsburgh, PA, USA, 28–30 October 2019; pp. 210–221. 20. Microsoft. Path Guide: Plug-and-Play Indoor Navigation. Available online: https://www.microsoft.com/en-us/research/project/path-guide-plug-play-indoor-navigation/ (accessed on 14 November 2020).
21. Tsai, C.H.; Ren, P.; Elyasi, F.; Manduchi, R. Finding Your Way Back: Comparing Path Odometry Algorithms for Assisted Return. In Proceedings of the 2021 IEEE International Conference on Pervasive Computing and Communications Workshops and Other Affiliated Events (PerCom Workshops), Kassel, Germany, 22–26 March 2021. 22. Tian, Q.; Salcic, Z.; Kevin, I.; Wang, K.; Pan, Y. An Enhanced Pedestrian Dead Reckoning Approach for Pedestrian Tracking Using Smartphones. In Proceedings of the 2015 IEEE Tenth International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), Singapore, 7–9 April 2015; pp. 1–6. 23. Jin, Y.; Toh, H.-S.; Soh, W.-S.; Wong, W.-C. A Robust Dead-Reckoning Pedestrian Tracking System with Low Cost Sensors. In Proceedings of the 2011 IEEE International Conference on Pervasive Computing and Communications (PerCom), Seattle, WA, USA, 21–25 March 2011; pp. 222–230. 24. Pai, D.; Sasi, I.; Mantripragada, P.S.; Malpani, M.; Aggarwal, N. Padati: A Robust Pedestrian Dead Reckoning System on Smartphones. In Proceedings of the 2012 IEEE 11th International Conference on Trust, Security and Privacy in Computing and Communications, Liverpool, UK, 25–27 June 2012; pp. 2000–2007. 25. Zhao, H.; Zhang, L.; Qiu, S.; Wang, Z.; Yang, N.; Xu, J. Pedestrian Dead Reckoning Using Pocket-Worn Smartphone. IEEE Access 2019, 7, 91063–91073. 26. Xiao, Z.; Wen, H.; Markham, A.; Trigoni, N. Robust Indoor Positioning with Lifelong Learning. IEEE J. Sel. Areas Commun. 2015, 33, 2287–2301. 27. Xiao, Z.; Wen, H.; Markham, A.; Trigoni, N. Robust Pedestrian Dead Reckoning (R-PDR) for Arbitrary Mobile Device Placement. In Proceedings of the 2014 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Busan, Korea, 27–30 October 2014; pp. 187–196. 28. Harle, R. A Survey of Indoor Inertial Positioning Systems for Pedestrians. IEEE Commun. Surv. Tutor. 2013, 15, 1281–1293. 29. Alzantot, M.; Youssef, M.
UPTIME: Ubiquitous Pedestrian Tracking Using Mobile Phones. In Proceedings of the 2012 IEEE Wireless Communications and Networking Conference (WCNC), Paris, France, 1–4 April 2012; pp. 3204–3209. 30. Jayalath, S.; Abhayasinghe, N.; Murray, I. A Gyroscope Based Accurate Pedometer Algorithm. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation, Montbeliard, France, 28–31 October 2013. 31. Gu, F.; Khoshelham, K.; Shang, J.; Yu, F.; Wei, Z. Robust and Accurate Smartphone-Based Step Counting for Indoor Localization. IEEE Sens. J. 2017, 17, 3453–3460. 32. Edel, M.; Köppe, E. An Advanced Method for Pedestrian Dead Reckoning Using BLSTM-RNNs. In Proceedings of the 2015 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Banff, AB, Canada, 13–16 October 2015; pp. 1–6. 33. Kang, J.; Lee, J.; Eom, D.-S. Smartphone-Based Traveled Distance Estimation Using Individual Walking Patterns for Indoor Localization. Sensors 2018, 18, 3149. 34. Yoshida, T.; Nozaki, J.; Urano, K.; Hiroi, K.; Kaji, K.; Yonezawa, T.; Kawaguchi, N. Sampling Rate Dependency in Pedestrian Walking Speed Estimation Using DualCNN-LSTM. In Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, 2019; pp. 862–868. 35. Solin, A.; Cortes, S.; Rahtu, E.; Kannala, J. Inertial Odometry on Handheld Smartphones. In Proceedings of the 2018 21st International Conference on Information Fusion (FUSION), Cambridge, UK, 10–13 July 2018; pp. 1–5. 36. Kok, M.; Hol, J.D.; Schön, T.B. Using Inertial Sensors for Position and Orientation Estimation. arXiv 2017, arXiv:1704.06053. 37. Marron, J.J.; Labrador, M.A.; Menendez-Valle, A.; Fernandez-Lanvin, D.; Gonzalez-Rodriguez, M. Multi Sensor System for Pedestrian Tracking and Activity Recognition in Indoor Environments. Int. J. Ad Hoc Ubiquitous Comput.
2016, 23, 3–23. 38. Flores, G.H.; Manduchi, R.; Zenteno, E.D. Ariadne’s Thread: Robust Turn Detection for Path Back-Tracing Using the iPhone; IEEE: 2014. 39. Roy, N.; Wang, H.; Roy Choudhury, R. I Am a Smartphone and I Can Tell My User’s Walking Direction. In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services, Bretton Woods, NH, USA, 16–19 June 2014; pp. 329–342. 40. Kunze, K.; Lukowicz, P.; Partridge, K.; Begole, B. Which Way Am I Facing: Inferring Horizontal Device Orientation from an Accelerometer Signal. In Proceedings of the 2009 International Symposium on Wearable Computers, NW Washington, DC, USA, 4–7 September 2009; pp. 149–150. 41. Steinhoff, U.; Schiele, B. Dead Reckoning from the Pocket: An Experimental Study. In Proceedings of the 2010 IEEE International Conference on Pervasive Computing and Communications (PerCom), Mannheim, Germany, 29 March–2 April 2010; pp. 162–170. 42. Qian, J.; Ma, J.; Ying, R.; Liu, P.; Pei, L. An Improved Indoor Localization Method Using Smartphone Inertial Sensors. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation, Montbeliard, France, 28–31 October 2013; pp. 1–7. 43. Brajdic, A.; Harle, R. Walk Detection and Step Counting on Unconstrained Smartphones. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Zurich, Switzerland, 8–12 September 2013; pp. 225–234. 44. Scholkmann, F.; Boss, J.; Wolf, M. An Efficient Algorithm for Automatic Peak Detection in Noisy Periodic and Quasi-Periodic Signals. Algorithms 2012, 5, 588–603. 45. Mannini, A.; Sabatini, A.M. A Hidden Markov Model-Based Technique for Gait Segmentation Using a Foot-Mounted Gyroscope. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; pp. 4369–4373. 46. Yao, Y.; Pan, L.; Fen, W.; Xu, X.; Liang, X.; Xu, X.
A Robust Step Detection and Stride Length Estimation for Pedestrian Dead Reckoning Using a Smartphone. IEEE Sens. J. 2020, 20, 9685–9697. 47. Ho, N.-H.; Truong, P.H.; Jeong, G.-M. Step-Detection and Adaptive Step-Length Estimation for Pedestrian Dead-Reckoning at Various Walking Speeds Using a Smartphone. Sensors 2016, 16, 1423. 48. Sun, Y.; Wu, H.; Schiller, J. A Step Length Estimation Model for Position Tracking. In Proceedings of the 2015 International Conference on Localization and GNSS (ICL-GNSS), Gothenburg, Sweden, 22–24 June 2015; pp. 1–6. 49. Racko, J.; Brida, P.; Perttula, A.; Parviainen, J.; Collin, J. Pedestrian Dead Reckoning with Particle Filter for Handheld Smartphone. In Proceedings of the 2016 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Alcalá de Henares, Spain, 4–7 October 2016; pp. 1–7. 50. Klepal, M.; Beauregard, S. A Novel Backtracking Particle Filter for Pattern Matching Indoor Localization. In Proceedings of the First ACM International Workshop on Mobile Entity Localization and Tracking in GPS-Less Environments, San Francisco, CA, USA, 19 September 2008; pp. 79–84. 51. Leppäkoski, H.; Collin, J.; Takala, J. Pedestrian Navigation Based on Inertial Sensors, Indoor Map, and WLAN Signals. J. Signal Process. Syst. 2013, 71, 287–296. 52. Bao, H.; Wong, W.-C. An Indoor Dead-Reckoning Algorithm with Map Matching. In Proceedings of the 2013 9th International Wireless Communications and Mobile Computing Conference (IWCMC), Sardinia, Italy, 1–5 July 2013; pp. 1534–1539. 53. Chen, C.; Lu, X.; Markham, A.; Trigoni, N. IONet: Learning to Cure the Curse of Drift in Inertial Odometry. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. 54. Cortés, S.; Solin, A.; Kannala, J. Deep Learning Based Speed Estimation for Constraining Strapdown Inertial Navigation on Smartphones.
In Proceedings of the 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP), Aalborg, Denmark, 17–20 September 2018; pp. 1–6. 55. Feigl, T.; Kram, S.; Woller, P.; Siddiqui, R.H.; Philippsen, M.; Mutschler, C. RNN-Aided Human Velocity Estimation from a Single IMU. Sensors 2020, 20, 3656. 56. Jamil, F.; Iqbal, N.; Ahmad, S.; Kim, D.-H. Toward Accurate Position Estimation Using Learning to Prediction Algorithm in Indoor Navigation. Sensors 2020, 20, 4410. 57. Kawaguchi, N.; Nozaki, J.; Yoshida, T.; Hiroi, K.; Yonezawa, T.; Kaji, K. End-to-End Walking Speed Estimation Method for Smartphone PDR Using DualCNN-LSTM. In Proceedings of the IPIN (Short Papers/Work-in-Progress Papers), Pisa, Italy, 30 September–3 October 2019; pp. 463–470. 58. Wang, Q.; Luo, H.; Ye, L.; Men, A.; Zhao, F.; Huang, Y.; Ou, C. Personalized Stride-Length Estimation Based on Active Online Learning. IEEE Internet Things J. 2020, 7, 4885–4897. 59. Klein, I.; Asraf, O. StepNet—Deep Learning Approaches for Step Length Estimation. IEEE Access 2020, 8, 85706–85713. 60. Feigl, T.; Kram, S.; Woller, P.; Siddiqui, R.H.; Philippsen, M.; Mutschler, C. A Bidirectional LSTM for Estimating Dynamic Human Velocities from a Single IMU. In Proceedings of the 2019 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Pisa, Italy, 30 September–3 October 2019; pp. 1–8. 61. Wagstaff, B.; Kelly, J. LSTM-Based Zero-Velocity Detection for Robust Inertial Navigation. In Proceedings of the 2018 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Nantes, France, 24–27 September 2018; pp. 1–8. 62. Kayukawa, S.; Ishihara, T.; Takagi, H.; Morishima, S.; Asakawa, C. Guiding Blind Pedestrians in Public Spaces by Understanding Walking Behavior of Nearby Pedestrians. In Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2020; Volume 4, pp. 1–22. 63. Fusco, G.; Coughlan, J.M. 
Indoor Localization for Visually Impaired Travelers Using Computer Vision on a Smartphone. In Proceedings of the 17th International Web for All Conference, Taipei, Taiwan, 20–21 April 2020; pp. 1–11. 64. Leung, T.-S.; Medioni, G. Visual Navigation Aid for the Blind in Dynamic Environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Columbus, OH, USA, 23–28 June 2014; pp. 565–572. 65. Ahmetovic, D.; Gleason, C.; Ruan, C.; Kitani, K.; Takagi, H.; Asakawa, C. NavCog: A Navigational Cognitive Assistant for the Blind. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services, Florence, Italy, 6–9 September 2016; pp. 90–99. 66. Riehle, T.H.; Anderson, S.M.; Lichter, P.A.; Giudice, N.A.; Sheikh, S.I.; Knuesel, R.J.; Kollmann, D.T.; Hedin, D.S. Indoor Magnetic Navigation for the Blind. In Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, USA, 28 August–1 September 2012; pp. 1972–1975. 67. Fallah, N.; Apostolopoulos, I.; Bekris, K.; Folmer, E. The User as a Sensor: Navigating Users with Visual Impairments in Indoor Spaces Using Tactile Landmarks. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Austin, TX, USA, 5–10 May 2012; pp. 425–432. 68. Riehle, T.H.; Anderson, S.M.; Lichter, P.A.; Whalen, W.E.; Giudice, N.A. Indoor Inertial Waypoint Navigation for the Blind. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 5187–5190. 69. Wang, Q.; Ye, L.; Luo, H.; Men, A.; Zhao, F.; Huang, Y. Pedestrian Stride-Length Estimation Based on LSTM and Denoising Autoencoders. Sensors 2019, 19, 840. 70. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. 71. Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y.
Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. arXiv 2014, arXiv:1412.3555. 72. Pandit, S.M.; Zhang, W. Modeling Random Gyro Drift Rate by Data Dependent Systems. IEEE Trans. Aerosp. Electron. Syst. 1986, 455–460. 73. Chen, R.; Liu, J.S. Mixture Kalman Filters. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2000, 62, 493–508. 74. Cheng, Y. Mean Shift, Mode Seeking, and Clustering. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 790–799. 75. Trinh, V.; Manduchi, R. Semantic Interior Mapology: A Toolbox for Indoor Scene Description from Architectural Floor Plans. In Proceedings of the 24th International Conference on 3D Web Technology, Los Angeles, CA, USA, 26–28 July 2019; Volume 2019. 76. Horvat, M.; Ray, C.; Ramsey, V.K.; Miszko, T.; Keeney, R.; Blasch, B.B. Compensatory Analysis and Strategies for Balance in Individuals with Visual Impairments. J. Vis. Impair. Blind. 2003, 97, 695–703. 77. Koskimäki, H. Avoiding Bias in Classification Accuracy: A Case Study for Activity Recognition. In Proceedings of the 2015 IEEE Symposium Series on Computational Intelligence, Cape Town, South Africa, 7–10 December 2015; pp. 301–306. 78. Sturm, J.; Engelhard, N.; Endres, F.; Burgard, W.; Cremers, D. A Benchmark for the Evaluation of RGB-D SLAM Systems. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Algarve, Portugal, 7–12 October 2012; pp. 573–580. 79. Seyler, S.L.; Kumar, A.; Thorpe, M.F.; Beckstein, O. Path Similarity Analysis: A Method for Quantifying Macromolecular Pathways. PLoS Comput. Biol. 2015, 11, e1004568.
Conclusion
This concludes the description of the preferred embodiment of the present invention. The foregoing description of one or more embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed.
Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto.