Title:
DISTRIBUTED MULTI-TASK MACHINE LEARNING FOR TRAFFIC PREDICTION
Document Type and Number:
WIPO Patent Application WO/2023/276420
Kind Code:
A1
Abstract:
A distributed machine learning based traffic prediction method is provided for predicting traffic on roads. The method includes distributing global multi-task traffic models from a learning server to learning agents for local training, uploading the locally trained traffic models from the learning agents to the learning server, updating the global multi-task traffic models at the learning server using the locally trained model parameters acquired from the learning agents, generating a time-dependent global traffic map at the learning server using the well-trained global multi-task traffic models, distributing the time-dependent global traffic map to vehicles traveling on the roads, and computing, by a vehicle, an optimal travel route with the least travel time using the time-dependent global traffic map based on a driving plan.

Inventors:
GUO JIANLIN (US)
ZENG TENGCHAN (US)
KIM KYEONG JIN (US)
PARSONS KIERAN (US)
ORLIK PHILIP (US)
DI CAIRANO STEFANO (US)
Application Number:
PCT/JP2022/018345
Publication Date:
January 05, 2023
Filing Date:
April 14, 2022
Assignee:
MITSUBISHI ELECTRIC CORP (JP)
International Classes:
G08G1/01
Foreign References:
US 2019/0311298 A1 (2019-10-10)
Other References:
WILBUR MICHAEL ET AL: "Time-dependent Decentralized Routing using Federated Learning", 2020 IEEE 23RD INTERNATIONAL SYMPOSIUM ON REAL-TIME DISTRIBUTED COMPUTING (ISORC), IEEE, 19 May 2020 (2020-05-19), pages 56 - 64, XP033778097, DOI: 10.1109/ISORC49007.2020.00018
YI LIU ET AL: "Privacy-preserving Traffic Flow Prediction: A Federated Learning Approach", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 19 March 2020 (2020-03-19), XP081624790
Attorney, Agent or Firm:
FUKAMI PATENT OFFICE, P.C. (JP)
Claims:
[CLAIMS]

[Claim 1]

A computer-implemented distributed machine learning based traffic prediction method for predicting traffic of roads, comprising: distributing global multi-task traffic models by a learning server to learning agents to locally train the traffic models; uploading locally trained global multi-task traffic models to the learning server, wherein the locally trained global multi-task traffic models have been trained by the learning agents; updating the global multi-task traffic models by the learning server using the locally trained global multi-task traffic models uploaded from the learning agents; generating a time-dependent global traffic map by the learning server using the updated global multi-task traffic models; and distributing the time-dependent global traffic map to each of vehicles traveling on the roads.

[Claim 2]

The method of claim 1, wherein the learning server designs the global multi-task traffic models in the form of traffic prediction neural networks, wherein the learning server can be a roadside unit, an eNodeB, or a cloud.

[Claim 3]

The method of claim 1, wherein the distributed learning agents train the global multi-task traffic models by using their own local data, wherein the learning agents can be infrastructure data collectors, and wherein each data collector is located on the road networks and configured to collect traffic data at its arranged location.

[Claim 4]

The method of claim 1, wherein the learning server is configured to design data cluster rules and distribute the data cluster rules to the learning agents.

[Claim 5]

The method of claim 4, wherein each learning agent is configured to cluster its local data using the distributed data cluster rules.

[Claim 6]

The method of claim 1, wherein the distributed multi-task machine learning is performed over multiple rounds, wherein in each round the learning server distributes the updated traffic models to the learning agents, wherein each learning agent trains the received traffic models for multiple iterations using its local data, wherein each learning agent uploads the locally trained traffic models to the learning server, and wherein the learning rounds continue until the traffic models are well-trained.

[Claim 7]

The method of claim 6, wherein the learning server is configured to build the global multi-task traffic models by aggregating the locally trained traffic models received from the learning agents, wherein the learning server aggregates the global multi-task traffic models by solving, for each data cluster $m$, the optimization problem $\min_{w_m \in \mathbb{R}^K} F_m(w_m)$, with $F_m(w_m)$ defined as $F_m(w_m) = \sum_{n=1}^{N} \frac{S_{m,n}}{S(m)} F_{m,n}(w_m)$, where $F_{m,n}(w_m) = \frac{1}{S_{m,n}} \sum_{i=1}^{S_{m,n}} f(w_m, x_{m,n,i}, y_{m,n,i})$, $(x_{m,n,i}, y_{m,n,i})$ is the $i$-th training data sample belonging to cluster $m$ at data collector $n$ with $S_{m,n}$ as the total number of such data samples, $S(m)$ refers to the total number of training data samples belonging to cluster $m$ across all data collectors, $F_{m,n}(w_m)$ denotes the loss function of cluster $m$ at data collector $n$, and $K$ is the number of the model parameters.

[Claim 8]

The method of claim 1, wherein the learning server and the learning agents collaboratively build the time-dependent global traffic map, wherein the time-dependent global traffic map is modeled as a time-dependent graph, and wherein each edge of the time-dependent graph represents the predicted directional travel time on the road segment connecting two vertices of the graph.

[Claim 9]

The method of claim 8, wherein the learning server distributes the well-trained global traffic models to the learning agents for travel time prediction, wherein each learning agent computes travel time on road segments by predicting traffic speed using the global traffic models and its local data, and wherein the traffic speed prediction is multi-horizon, predicting short-time traffic speed, middle-time traffic speed and long-time traffic speed.

[Claim 10]

The method of claim 1, wherein a vehicle uses the time-dependent graph to determine a route leaving the current location $s$ at time $t_s$ to the destination $d$ with the least travel time by solving the following optimization problem:
$$\min \; t_d(s, t_s) \quad \text{s.t.} \quad t_1 = t_s, \qquad t_{i+1} = t_i + w_{v_i, v_{i+1}}(t_i),$$
where $t_d(s, t_s)$ denotes the travel time leaving location $s$ at time $t_s$ to destination location $d$, the first constraint is due to the fact that the vehicle departs from $s$ at time $t_s$, and the second constraint shows that the arrival time at vertex $v_{i+1}$ equals the sum of the departure time at $v_i$ and the travel time on the road sub-segment $(v_i, v_{i+1})$ at time $t_i$.

[Claim 11]

The method of claim 1, wherein the road networks are generated by converting a map stored in a map server, wherein the map is provided so as to cover a large-scale geographic region such as a city.

[Claim 12]

The method of claim 11, wherein vertices of the time-dependent graph include vertices corresponding to intersections or connecting points of two adjoining road segments, wherein edges of the time-dependent graph are the road segments connecting the adjacent vertices.

[Claim 13]

The method of claim 11, wherein the road networks are converted to a time-dependent graph based on the well-trained traffic prediction neural networks.

[Claim 14]

The method of claim 1, wherein a driving plan includes a current location, a destination, a time, a date, via-locations on the road networks, or at least part of combinations thereof.

[Claim 15]

The method of claim 1, wherein each learning agent trains traffic prediction neural networks corresponding to different traffic tasks.

[Claim 16]

The method of claim 12, wherein an edge is configured to include at least one learning agent.

[Claim 17]

The method of claim 8, wherein travel time on an edge of the time-dependent graph is computed by using piece-wise traffic speed prediction based on a prediction time interval Δt.

[Claim 18]

The method of claim 1, wherein the distributed machine learning can be realized by a federated learning (FL) technique.

[Claim 19]

A local traffic prediction agent for providing locally trained traffic models to a learning server, comprising: an interface configured to collect local traffic data from sensors arranged on a road network, wherein the interface is configured to acquire multi-task traffic models and data cluster rules from the learning server via a communication network; a memory configured to store the local traffic data, the data cluster rules, trained traffic models and traffic prediction neural networks; and a processor, in connection with the memory, configured to: locally train the traffic prediction neural networks to update the acquired multi-task traffic models of the traffic prediction neural networks using the local traffic data based on the data cluster rules; and upload the updated locally trained multi-task traffic models to the learning server via the interface using the communication network.

[Claim 20]

A learning server for providing traffic prediction to a vehicle traveling on a road network, comprising: a transceiver configured to acquire trained multi-task parameters of traffic prediction neural networks from a local traffic prediction agent of claim 19 via a communication network, wherein the local traffic prediction agent is arranged at a location on the road network; a memory configured to store traffic data, a global time-dependent map, traffic prediction neural networks, trained multi-task traffic models and a map of the road network; and one or more processors, in connection with the memory, configured to perform steps of: updating the traffic prediction neural networks using the trained multi-task parameters; generating an updated global time-dependent traffic map based on the trained multi-task traffic models; distributing the updated global time-dependent traffic map to the vehicle traveling on the road network; and distributing data clustering rules to the local traffic prediction agents.

[Claim 21]

A distributed machine learning based traffic prediction system for providing traffic prediction to vehicles traveling on a road network, comprising: at least one local traffic prediction agent of claim 19; at least one learning server of claim 20; and a communication network configured to connect the at least one local traffic prediction agent, the at least one learning server, at least one roadside unit and vehicles traveling on the road network.

Description:
[DESCRIPTION]

[Title of Invention]

DISTRIBUTED MULTI-TASK MACHINE LEARNING FOR TRAFFIC

PREDICTION

[Technical Field]

[0001] The invention relates generally to machine learning for vehicular traffic systems, and more particularly to methods and apparatuses for distributed multi-task machine learning for vehicular traffic prediction and its application to route planning.

[Background Art]

[0002] Intelligent transportation systems become more and more important for smart cities as the number of connected vehicles and autonomous vehicles increases rapidly. Different from conventional vehicles, connected vehicles and autonomous vehicles are much more intelligent. They are not only capable of collecting various vehicle data and traffic data but also capable of running advanced algorithms to guide their mobility. In addition, far more traffic data are collected by transportation infrastructure data collectors and mobile devices. For example, the California Caltrans Performance Measurement System (PeMS) has installed hundreds of thousands of data collectors across the state to collect various traffic data. Each data collector collects traffic data every five minutes. Furthermore, mobile devices such as smart phones can also collect traffic data and provide crowdsourced information. As a result, a huge amount of traffic data is available. It is critical to utilize vehicle intelligence and the rich traffic data to improve driving safety, travel time, energy efficiency, air pollution reduction, etc.

[0003] However, realizing intelligent traffic is an extremely difficult problem. Physical roads form a complex road network. Most importantly, traffic conditions such as congestion at one location can propagate to and impact traffic conditions at other locations. Furthermore, unexpected events such as traffic accidents and driver behavior can make the traffic condition even more dynamic and uncertain. Therefore, accurately predicting traffic and applying the prediction to route planning is challenging yet in demand, because route planning is probably the most popular application for vehicles. Combining existing technologies, however, is not able to solve the problem.

[0004] There are existing route planning methods. However, existing route planning methods relying on crowdsourcing information can make misleading decisions. For example, if one vehicle carries 100 mobile devices, crowdsourcing based route planning methods may conclude that there are 100 vehicles, which results in a much higher estimated traffic density. As a result, the corresponding road may be considered congested. In addition, existing route planning methods plan routes based on current traffic information without prediction. However, a vehicular traffic system is a very dynamic system, and the traffic condition varies from time to time. Therefore, to make optimal route planning, intelligent traffic prediction methods are required. The intelligent traffic prediction methods heavily rely on machine learning techniques.

[0005] Machine learning techniques have been applied in vehicle mobility management. For example, in autonomous driving vehicles, different types of sensors are employed to collect data and various machine learning algorithms are used to learn and analyze data for controlling and guiding vehicle motion. For connected vehicles, machine learning techniques can be used at the infrastructure, such as the cloud, to realize centralized learning and thereby remotely control vehicle mobility.

[0006] To plan a route, prediction techniques must be able to make short time, middle time and long time predictions. However, on-board learning in an autonomous vehicle and centralized learning at the cloud are not practical for route planning. Firstly, machine learning algorithms need to be trained using a large amount of different data types. A vehicle can only collect limited data because it can only see the scene within its view range and cannot see things blocked by objects such as buildings and other vehicles, even if those objects are close to it. The limited data can train and predict instant local traffic for instant motion planning. However, the instant prediction is not suitable for longer-time route planning. Secondly, even though the cloud has the potential to collect a sufficient amount of data to train machine learning algorithms, multiple challenges are present: 1) cloud or centralized learning relies on communication, but communication bandwidth is limited and it is therefore impractical for data collectors such as vehicles to send all data to the cloud; 2) data privacy policy may prevent data collectors from transferring data to the cloud because machine learning algorithms are able to learn a data collector's private information and a driver's personal information; and 3) security policy may also prevent data collectors from sending their data to the cloud or a centralized server, e.g., security attackers may intercept data and locate a driver's location for dangerous action.

[0007] Efficient traffic requires efficient road utilization. To achieve that, route planning algorithms must make optimal route plans to minimize traffic congestion and reduce travel time. To that end, traffic prediction techniques become critical for efficient traffic. As described above, conventional centralized learning and emerging on-board learning cannot make feasible traffic predictions. The existing route planning methods cannot plan optimal routes due to the lack of traffic prediction. Therefore, it is desirable to provide an accurate and practical vehicular traffic prediction mechanism to perform optimal route planning for intelligent traffic.

[0008] Accordingly, there is a need to provide a method for accurate vehicular traffic prediction and a route planning method to plan optimal route by using the predicted traffic.

[Summary of Invention]

[0009] It is one object of some embodiments to provide a distributed machine learning based vehicular traffic prediction method that can accurately predict short time, middle time and long time traffic for both city roads and highways. Additionally, it is another object of some embodiments to make optimal route planning for vehicles to minimize traffic congestion and travel time by using the accurate traffic prediction.

[0010] Some embodiments are based on the recognition that vehicular traffic data have been widely collected by transportation infrastructure data collectors and vehicles. How to utilize the available data to optimize vehicular traffic becomes an issue to be addressed. Due to factors such as communication bandwidth limitation, privacy protection and security, it is impractical to transfer all data to a central server for centralized data analysis and traffic prediction. On the other hand, the limited amount of data at an individual data collector is not sufficient to train a machine learning algorithm and make large scale traffic prediction for a city or a state, because a data collector does not know the traffic conditions at other locations. For example, a vehicle cannot predict the traffic where the vehicle has not yet traveled. Therefore, conventional machine learning approaches are not suitable to make traffic prediction for optimal route planning.

[0011] To that end, some embodiments of the invention utilize distributed machine learning techniques such as federated learning to build robust traffic models for accurate traffic prediction, wherein infrastructure devices such as an IEEE DSRC/WAVE roadside unit (RSU) and/or a 3GPP C-V2X eNodeB and/or a remote server act as the learning server and data collectors serve as learning agents. A learning server coordinates distributed learning among a set of data collectors. The learning server first designs and distributes the traffic models, such as neural networks, to the set of data collectors for the first round of distributed training. Each data collector then trains the received traffic models independently by using its own data, without sharing its data with other data collectors or the learning server. After a certain number of training iterations, each data collector sends the trained traffic models to the learning server, which then aggregates the received traffic models from all data collectors to generate the common global traffic models. Upon completion of the model aggregation, the learning server re-distributes the aggregated traffic models to the data collectors for the second round of the training process. This process of training and aggregation continues until robust traffic models are built.
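
For illustration only, the following Python sketch outlines this round-based coordination between a learning server and learning agents. It is a minimal sketch assuming a simple linear model trained by gradient descent; the class and function names (LocalAgent, aggregate, run_federated_rounds) are illustrative and are not part of the patent.

```python
# Minimal sketch of the distributed training loop described above.
# LocalAgent, aggregate() and the linear-model placeholder are illustrative
# assumptions, not elements of the patent text.
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class LocalAgent:
    data_x: np.ndarray          # local input samples
    data_y: np.ndarray          # local target speeds

    def train(self, weights: np.ndarray, iters: int, lr: float) -> np.ndarray:
        """Run a few local gradient steps on a linear model (placeholder for a neural network)."""
        w = weights.copy()
        for _ in range(iters):
            pred = self.data_x @ w
            grad = self.data_x.T @ (pred - self.data_y) / len(self.data_y)
            w -= lr * grad
        return w

def aggregate(local_weights: List[np.ndarray], sizes: List[int]) -> np.ndarray:
    """Sample-size weighted average of the locally trained parameters."""
    total = sum(sizes)
    return sum(w * (s / total) for w, s in zip(local_weights, sizes))

def run_federated_rounds(agents: List[LocalAgent], dim: int, rounds: int = 10) -> np.ndarray:
    global_w = np.zeros(dim)                      # server designs the initial model
    for _ in range(rounds):                       # one round = distribute / train locally / aggregate
        locals_w = [a.train(global_w, iters=5, lr=0.01) for a in agents]
        global_w = aggregate(locals_w, [len(a.data_y) for a in agents])
    return global_w
```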

[0012] It must be recognized that even though data collectors are distributed at different geographic locations, the traffic data collected among data collectors may be correlated, because in a vehicular environment the traffic condition at one location can propagate to other locations and impact traffic conditions there. Therefore, data collected at one location can also impact data collected at other locations. In addition, traffic patterns at different locations can be different. For example, the traffic pattern at an intersection is different from the traffic pattern on a freeway.

[0013] To that end, it is desirable to have different but correlated traffic models trained by the distributed data collectors. Some embodiments are based on the realization that multi-task distributed learning techniques are suitable to predict large scale traffic for route planning. Accordingly, the learning server designs different but correlated traffic models to be distributed to the data collectors such that traffic models for closer data collectors have a closer relationship. Taking a neural network based traffic model as an example, closer data collectors can share some weight parameters, whereas the neural network models for data collectors far away from each other do not share parameters.

[0014] Some embodiments are based on the recognition that data collectors reflect location information because data collectors are distributed at different locations. Besides the location factor, there are other factors, e.g., time, weather, road condition and special events, that can also impact the traffic environment. At the same location, the traffic condition varies with different times, different weather, etc. Rush hour traffic conditions are much different from off hour traffic conditions, and snow day traffic conditions are much different from sunny day traffic conditions.

[0015] To that end, it is desirable that data collectors divide their data into different clusters based on time, weather, etc. Accordingly, the learning server defines a set of rules and distributes the rules to the data collectors to cluster their data. As a result, data collectors train different traffic models using different data clusters. Data collectors do not train traffic models for which they do not have appropriate data. Therefore, data collectors only send the trained traffic models to the learning server.

[0016] Accordingly, the learning server builds common global traffic models by aggregating the locally trained traffic models while considering information including location, time, weather, etc.

[0017] According to some embodiments of the present invention, a distributed machine learning based traffic prediction method can be provided for predicting traffic of roads. In this case, the distributed machine learning based traffic prediction method may be a computer-implemented distributed machine learning based traffic prediction method for predicting traffic of roads. The method may include distributing global multi-task traffic models by a learning server to learning agents to locally train the traffic models; uploading locally trained global multi-task traffic models to the learning server, wherein the locally trained global multi-task traffic models have been trained by the learning agents; updating the global multi-task traffic models by the learning server using the locally trained global multi-task traffic models uploaded from the learning agents; generating a time-dependent global traffic map by the learning server using the updated global multi-task traffic models; and distributing the time-dependent global traffic map to each of the vehicles traveling on the roads.

[0018] Further, according to some embodiments of the present invention, a local traffic prediction agent can be provided for providing locally trained traffic models to a learning server. The local traffic prediction agent may be a hardware device or software, which can be referred to as a local traffic prediction agent, stored in a device including at least one memory and at least one processor. The local traffic prediction agent may include an interface configured to collect local traffic data from sensors arranged on a road network, wherein the interface is configured to acquire multi-task traffic models and data cluster rules from the learning server via a communication network; a memory configured to store the local traffic data, the data cluster rules, trained traffic models and traffic prediction neural networks; and a processor, in connection with the memory, configured to: locally train the traffic prediction neural networks to update the acquired multi-task traffic models of the traffic prediction neural networks using the local traffic data based on the data cluster rules; and upload the updated locally trained multi-task traffic models to the learning server via the interface using the communication network.

[0019] Yet further, some embodiments of the present invention can provide a distributed machine learning based traffic prediction system for providing traffic prediction to a vehicle traveling on a road network. The system may include at least one local traffic prediction agent as described above, at least one learning server as described above, and a communication network configured to connect the at least one local traffic prediction agent, the at least one learning server, at least one roadside unit and vehicles traveling on the road network.

[0020] The at least one local traffic prediction agent may include an interface configured to collect local traffic data from sensors arranged on a road network, wherein the interface is configured to acquire multi-task traffic models and data cluster rules from the learning server via a communication network; a memory configured to store the local traffic data, the data cluster rules, trained traffic models and traffic prediction neural networks; and a processor, in connection with the memory, configured to: locally train the traffic prediction neural networks to update the acquired multi-task traffic models of the traffic prediction neural networks using the local traffic data based on the data cluster rules; and upload the updated locally trained multi-task traffic models to the learning server via the interface using the communication network.

[0021] The at least one learning server may include a transceiver configured to acquire trained multi-task parameters of traffic prediction neural networks from a local traffic prediction agent described above via a communication network, wherein the local traffic prediction agent is arranged at a location on the road network; a memory configured to store traffic data, a global time-dependent map, traffic prediction neural networks, trained multi-task traffic models and the map of the road network; and one or more processors, in connection with the memory, configured to perform steps of: updating the traffic prediction neural networks using the trained multi-task parameters; generating an updated global time-dependent traffic map based on the trained multi-task traffic models; distributing the updated global time-dependent traffic map to the vehicle traveling on the road network; and distributing data clustering rules to the local traffic prediction agents.

[0022] The system may include an input interface configured to update model parameters (learned models) of traffic prediction neural networks at a learning server by acquiring trained parameters from learning agents via the input interface, wherein each learning agent is arranged at a location on the road networks and is configured to train multi-task traffic models by collecting traffic data (patterns) at the arranged location; generating a global time-dependent traffic map based on the well-trained multi-task traffic models; determining a driving plan by a vehicle traveling on the road networks; and computing an optimal route with the least travel time by the vehicle based on the driving plan and the global time-dependent map.

[0023] Some embodiments are based on the recognition that each application in a vehicular environment has different requirements. Therefore, different technologies must be developed for different applications. Route planning requires large scale traffic prediction with different time horizons, including short time prediction, middle time prediction and long time prediction.

[0024] Accordingly, some embodiments of the current invention provide multi-horizon traffic prediction such that, for each of short time prediction, middle time prediction and long time prediction, traffic is predicted with multiple horizons in the time domain. A prediction time horizon consists of multiple prediction time periods. For example, a short time horizon may consist of 5 prediction periods, a middle time horizon may include 20 prediction periods and a long time horizon may consist of 50 prediction periods, where a prediction period represents a Δt time interval, e.g., for Δt = 5 minutes, the traffic is predicted every 5 minutes. As a result, in a short time horizon traffic is predicted 5 times, in a middle time horizon traffic is predicted 20 times, and in a long time horizon traffic is predicted 50 times. Even though the longer time horizon provides more traffic predictions, the shorter time horizon gives more accurate traffic predictions.
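
As a simple numerical illustration of the horizon and period relationship described above (Δt = 5 minutes and the 5/20/50 split are the example values from the text; the helper name is illustrative, not part of the patent):

```python
# Illustrative computation of the prediction times within one horizon.
def prediction_times(start_minute: int, periods: int, dt_minutes: int = 5):
    """Return the minutes ahead at which traffic is predicted within one horizon."""
    return [start_minute + (k + 1) * dt_minutes for k in range(periods)]

short = prediction_times(0, periods=5)    # 5 predictions: 5, 10, ..., 25 minutes ahead
middle = prediction_times(0, periods=20)  # 20 predictions, up to 100 minutes ahead
long_ = prediction_times(0, periods=50)   # 50 predictions, up to 250 minutes ahead
```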

[0025] Some embodiments are based on the recognition that route planning is to find an optimal route in a real road network for a trip based on criteria such as travel time and energy consumption.

[0026] Accordingly, some embodiments of the current invention formulate the route planning problem as an optimization problem to minimize travel time, even though other metrics such as energy consumption and driving comfort can also be optimized. The real road map is converted into a time-dependent graph, in which the vertices are intersections or connecting points of any two adjoining road sub-segments and the edges are the road sub-segments connecting two adjacent vertices. There is at least one data collector on each edge. Different from a conventional traffic graph built based on road structure and distance, an edge may consist of multiple road segments and, most importantly, the length of the edge is the travel time on the edge. As a result, when the traffic condition changes, the length of the edge also changes and therefore the shape of the graph varies as well.

[0027] Some embodiments are based on the recognition that there are uncertainties in the vehicular environment. Therefore, traffic models must be trained to handle unexpected events such as traffic accidents. It is impractical for data collectors to capture all types of unexpected events. However, vehicles can capture these events when they travel on the roads.

[0028] Accordingly, the route planning model and the traffic prediction model can interact with each other to make real time traffic model enhancements.

[0029] The presently disclosed embodiments will be further explained with reference to the attached drawings. The drawings shown are not necessarily to scale, with emphasis instead generally being placed upon illustrating the principles of the presently disclosed embodiments.

[Brief Description of Drawings]

[0030]

[Fig. 1]

Figure 1 shows the intelligent vehicular transportation system, according to some embodiments of the present invention.

[Fig. 2]

Figure 2 shows an example of data clustering method that is used to divide data at each data collector into clusters, according to some embodiments of the present invention.

[Fig. 3A]

Figure 3A illustrates the traffic prediction architecture using distributed multi-task learning techniques, according to some embodiments of the present invention.

[Fig. 3B]

Figure 3B demonstrates a prediction time horizon with six prediction periods, according to some embodiments of the present invention.

[Fig. 3C]

Figure 3C shows a schematic illustrating an example of a traffic prediction architecture that includes the learning server connected to the distributed data collectors (learning agents) via the communication network(s), according to embodiments of the present invention.

[Fig. 4]

Figure 4 depicts a time-dependent graph to plan the route from departure point v1 to destination point v12, according to some embodiments of the present invention.

[Fig. 5A]

Figure 5A shows an example edge in the time-dependent graph and the travel time calculation, according to some embodiments of the present invention, where the travel time represents the length of the edge in the time-dependent graph.

[Fig. 5B]

Figure 5B illustrates the travel time calculation on a short road segment of the edge in the time-dependent graph such that vehicles can travel through the road segment within one prediction time period, according to some embodiments of the present invention.

[Fig. 5C]

Figure 5C depicts the travel time calculation on a long road segment of the edge in the time-dependent graph such that vehicles need multiple prediction time periods to travel through the road segment, according to some embodiments of the present invention.

[Fig. 6]

Figure 6 shows functional blocks and interaction among the components of the intelligent traffic system, according to some embodiments of the present invention.

[Fig. 7]

Figure 7 depicts multi-task federated learning algorithm for traffic speed prediction, according to some embodiments of the present invention.

[Fig. 8A]

Figure 8A shows the modified A* algorithm for optimal route planning based on the time-dependent graph, according to some embodiments of the present invention.

[Fig. 8B]

Figure 8B is the optimal route calculation algorithm used by the modified A* algorithm, according to some embodiments of the present invention.

[Description of Embodiments]

[0031] The following description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the following description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing one or more exemplary embodiments. Contemplated are various changes that may be made in the function and arrangement of elements without departing from the spirit and scope of the subject matter disclosed as set forth in the appended claims.

[0032] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, systems, processes, and other elements in the subject matter disclosed may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known processes, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments. Further, like reference numbers and designations in the various drawings indicate like elements.

[0033] Also, individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process may be terminated when its operations are completed, but may have additional steps not discussed or included in a figure. Furthermore, not all operations in any particularly described process may occur in all embodiments. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, the function's termination can correspond to a return of the function to the calling function or the main function.

[0034] Furthermore, embodiments of the subject matter disclosed may be implemented, at least in part, either manually or automatically. Manual or automatic implementations may be executed, or at least assisted, through the use of machines, hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine readable medium. A processor(s) may perform the necessary tasks.

[0035] To facilitate the development of an intelligent transportation system (ITS), it is imperative to have an accurate prediction of traffic conditions such as traffic flow and traffic speed. This is due to the fact that such knowledge can help drivers make effective travel decisions so as to mitigate traffic congestion, increase fuel efficiency, and alleviate air pollution. These promising benefits enable traffic prediction to play major roles in the advanced traveler information system, the advanced traffic management system, and the commercial vehicle operation that the ITS targets to achieve.

[0036] To reap all the aforementioned benefits, the traffic prediction must process the real-time and historical traffic data and observations collected by data collectors and mobile devices. For example, the inductive loop can measure the travel speed by reading the inductance changes over time, and such data can be used for traffic speed prediction. In addition, the wide use of mobile devices (e.g., on-board global positioning systems and phones) enables mobility data to be crowdsourced from the general public, further facilitating unprecedented traffic data collection. Such emerging big data can substantially augment the data availability in terms of coverage and fidelity and significantly boost data-driven traffic prediction. The prior art on traffic prediction can be mainly grouped into two categories. The first category focuses on using parametric approaches, such as the autoregressive integrated moving average (ARIMA) model and Kalman filtering models. When dealing with traffic presenting only regular variations, e.g., recurrent traffic congestion occurring in morning and evening rush hours, the parametric approaches can achieve promising prediction results. However, due to the stochastic and nonlinear nature of road traffic, the traffic predictions of the parametric approaches can deviate from the actual values, especially under abrupt traffic changes. Hence, instead of fitting the traffic data into a mathematical model as done by the parametric approach, an alternative is using nonparametric approaches, where machine learning (ML) based methods are used. For example, a stacked autoencoder model can be used to learn generic traffic flow features for the predictions. The long short-term memory (LSTM) recurrent neural network (RNN) can be used to predict the traffic flow, speed and occupancy based on the data collected by the data collector and its upstream and downstream data collectors. Along with the use of the RNN, the convolutional neural network (CNN) can also be utilized to capture the latent traffic evolution patterns within the underlying road network.

[0037] Although the prior arts focus on using advanced deep learning models for traffic prediction, all of them study the traffic variations with a single-task learning (STL) model. In reality, due to varying weather, changing road conditions (such as road work and accidents) and special events (e.g., football games and concerts), the traffic patterns on the road can vary significantly under different situations. Hence, a STL model is not able to capture such diverse and complex traffic situations. Moreover, due to the limited on-chip memory available at the data collector and mobile device, the local training data can be extremely insufficient and a promising prediction performance cannot be achieved. In addition, the prior arts assume that the data collected by a data collector can be shared with other data collectors or a centralized unit, like the data collector accessing the data from its upstream and downstream data collectors. However, the collected data can contain personal information, like driving license plates captured by cameras and the history trajectories of mobile phone users. In this case, directly sharing the traffic data among data collectors can raise privacy concerns. Meanwhile, the communication cost is another major concern.

[0038] Figure 1 shows the intelligent vehicular transportation system 100 and its components as well as the interactions among the components of the system, according to some embodiments of the current invention. The traffic system 100 includes a road network 105, distributed data collectors 110, learning servers 115 and vehicles 120. The distributed data collectors 110 may be referred to as local traffic prediction agents (or learning agents) 110. The data collectors 110 are learning agents and are deployed in the road network 105. The road network 105 consists of roads, roadside units, edge computing devices, etc. Vehicles 120 travel on the roads of the road network 105. The learning servers 115 can be located remotely or along the roadside. The learning servers 115 design/update traffic models and data clustering rules and aggregate the trained traffic models and traffic predictions. The learning servers 115 distribute the data clustering rules to the data collectors 110, via communication networks 112, to divide the data at each data collector into different clusters. The communication network 112, which can be a wired network or a wireless network, is configured to connect the distributed data collectors 110 and the learning servers 115, the learning servers 115 and the vehicles 120, the distributed data collectors 110 and the road network 105, the road network 105 and the vehicles 120, or any/all combinations thereof. Further, the communication network 112 may be a 4th generation core network, a 5th generation core network or a beyond 5th generation core network, and may connect to roadside units (RSUs/DSRC (Dedicated Short Range Communications) transceivers) arranged along roads or pedestrian passageways (not shown) or/and edge computing systems (not shown). In some cases, the roadside units may include cameras arranged on the road to monitor the vehicles traveling the roads or arranged at intersections to monitor pedestrians.

[0039] The learning servers 115 also distribute the traffic models to the data collectors 110 for distributed training. Each data cluster is used to train a traffic model, e.g., rush hour data is used to train the rush hour traffic model. The data collectors train traffic models using their local data and send the locally trained traffic models to the learning servers to build common global traffic models. The learning servers 115 distribute the final traffic models to the data collectors 110 for traffic prediction. Each data collector predicts traffic on the road segment where the data collector is located. The data collectors send their traffic predictions to the learning servers, which then combine all traffic predictions to build the global traffic prediction for route planning. The global traffic prediction is distributed to the vehicles 120 for route planning. When the vehicles 120 travel on the planned routes, they can provide the learning servers with information such as traffic accidents. The learning servers 115 can then coordinate the data collectors to update the traffic models.

Multi-horizon Traffic Speed Prediction

[0040] Consider a set of N data collectors, where a data collector can be a toll station, a loop detector, a camera, etc. To capture the road traffic dynamics over time, data collector n measures the average speed $x_n(t)$ at time t for all vehicles traversed in the past time period (e.g., Δt = 5 mins) and performs the speed prediction. Assume the data sample used for the prediction to be $[x_n(t-l+1), \ldots, x_n(t)]$ with lag variable $l$ when the data collector n predicts the future speed at time t. Assume the multi-horizon speed prediction to be $[\hat{x}_n(t+1), \ldots, \hat{x}_n(t+h)]$, with $\hat{x}_n(\cdot)$ as the predicted speed value and h as the maximum prediction time horizon.
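
A possible way to assemble such lag-window samples and multi-horizon targets from a measured speed series is sketched below; the array layout and function name are assumptions for illustration and are not specified in the patent.

```python
import numpy as np

def make_samples(speed_series: np.ndarray, lag: int, horizon: int):
    """Build (x, y) pairs: x holds the last `lag` measured speeds up to time t,
    y holds the speeds for the next `horizon` prediction periods."""
    xs, ys = [], []
    for t in range(lag, len(speed_series) - horizon + 1):
        xs.append(speed_series[t - lag:t])       # x_n(t-l+1), ..., x_n(t)
        ys.append(speed_series[t:t + horizon])   # targets for t+1, ..., t+h
    return np.array(xs), np.array(ys)
```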

[0041] To guarantee that the data collector can make accurate speed predictions, the data collectors train the machine learning model on the local traffic data and solve the following optimization problem:
$$\min_{w} \; \frac{1}{S_n} \sum_{i=1}^{S_n} f(w, x_{n,i}, y_{n,i}), \qquad (1)$$
where $S_n$ is the total number of training data samples within the local data at data collector n, $x_{n,i}$ is the i-th input data sample, $y_{n,i}$ is the i-th target output speed data, and $f(w, x_{n,i}, y_{n,i})$ is the loss function when the machine learning model with model parameters w is trained with data $(x_{n,i}, y_{n,i})$. The loss function plays a pivotal role in determining the machine learning performance, and the expression of the loss function is application specific. In traffic prediction, the most common loss function is the mean squared error (MSE). For the purpose of traffic management such as route planning, the data collectors send the speed predictions to the learning server over either a wired network or a wireless network. To avoid a large overhead at the learning server, the frequency at which the data collectors share the forecast results with the learning server should be relatively low, e.g., once per hour. The learning server can then broadcast the road map with traffic predictions, e.g., the time-dependent graph, to the vehicles operating within its coverage. The on-board unit (OBU) inside the vehicle can then choose the optimal route from its current location to the destination with the shortest travel time.
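
A minimal sketch of the local training objective in (1) with the MSE loss is given below, using a linear predictor as a stand-in for the traffic model (the patent trains neural networks); the names and array shapes are illustrative assumptions.

```python
import numpy as np

def mse_loss(w: np.ndarray, X: np.ndarray, Y: np.ndarray) -> float:
    """MSE over the S_n local samples; X: (S_n, lag), Y: (S_n, horizon), w: (lag, horizon)."""
    return float(np.mean((X @ w - Y) ** 2))

def local_gradient_step(w: np.ndarray, X: np.ndarray, Y: np.ndarray, lr: float = 0.01) -> np.ndarray:
    """One gradient-descent step on the local MSE objective of problem (1)."""
    grad = 2.0 * X.T @ (X @ w - Y) / Y.size
    return w - lr * grad
```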

[0042] To tackle the insufficient local training data and protect data privacy, some embodiments of the current invention apply distributed machine learning techniques to solve problem (1). There are different distributed machine learning techniques; some embodiments of the current invention use federated learning (FL) as an example to illustrate the distributed machine learning approach. Traffic data collected by the data collectors can have a strong spatial-temporal dependence. To capture the different traffic situations present in the collected traffic data, a multi-task FL model is provided, in which different learning models are designed for different traffic situations, e.g., a rush hour model is different from an off hour model. Furthermore, these learning models may be correlated, e.g., off hour traffic can impact rush hour traffic.

[0043] To facilitate the multi-task learning, the training data are partitioned into different clusters such that each cluster corresponds to a learning model, e.g., rush hour data are used to train the rush hour model. Data clustering is important for many reasons, e.g., off hour data is not suitable to train the rush hour traffic model, and local-road traffic data is not suitable to train the freeway traffic model. There are different ways to cluster data. Figure 2 illustrates a data clustering method 200 for traffic prediction, where location information is reflected by the location of the data collector 110. Different data collectors are located at different locations. The data 210 at a data collector is divided into summer data 220 and winter data 230. This level of data clustering reflects the weather. Each of the summer data and winter data is divided into weekend data 240 and weekday data 250. This level of data clustering reflects the day of the week. Each of the weekend data 240 and weekday data 250 is further divided into rush hour data 260 and off hour data 270.
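
A sketch of a clustering rule matching the Figure 2 hierarchy is shown below; the month and hour boundaries used to define the season and rush-hour splits are assumptions for illustration, not values given in the patent.

```python
from datetime import datetime

def cluster_id(ts: datetime) -> str:
    """Assign a data sample's timestamp to a cluster following the Figure 2 hierarchy."""
    season = "summer" if 4 <= ts.month <= 9 else "winter"            # assumed season split
    day = "weekend" if ts.weekday() >= 5 else "weekday"
    rush = "rush" if ts.hour in (7, 8, 9, 16, 17, 18) else "off"     # assumed rush-hour windows
    return f"{season}/{day}/{rush}"

# Example: cluster_id(datetime(2022, 7, 4, 8, 15)) -> "summer/weekday/rush"
```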

[0044] The learning server and data collectors collaboratively train the multi-task FL models for traffic prediction. Assume the data are partitioned into M clusters. The objective of the multi-task FL is to solve the following optimization problem for each data cluster m:
$$\min_{w_m \in \mathbb{R}^K} F_m(w_m), \qquad (2)$$
with $F_m(w_m)$ defined as
$$F_m(w_m) = \sum_{n=1}^{N} \frac{S_{m,n}}{S(m)} F_{m,n}(w_m),$$
where
$$F_{m,n}(w_m) = \frac{1}{S_{m,n}} \sum_{i=1}^{S_{m,n}} f(w_m, x_{m,n,i}, y_{m,n,i}),$$

$(x_{m,n,i}, y_{m,n,i})$ is the i-th training data sample belonging to cluster m at data collector n, with $S_{m,n}$ as the total number of such data samples. $S(m)$ refers to the total number of training data samples belonging to cluster m across all data collectors, $F_{m,n}(w_m)$ denotes the loss function of cluster m at data collector n, and K is the number of the model parameters.
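
For illustration, the weighted cluster objective $F_m(w_m)$ can be evaluated as in the sketch below, e.g., as the stopping criterion mentioned later; an MSE stand-in is assumed for the per-collector loss $F_{m,n}$, and the data layout and names are illustrative.

```python
import numpy as np

def global_cluster_loss(w_m: np.ndarray, cluster_data) -> float:
    """F_m(w_m) = sum_n (S_{m,n}/S(m)) * F_{m,n}(w_m), with MSE assumed as F_{m,n}.
    cluster_data: list of (X_n, Y_n) pairs, one per data collector n, for cluster m."""
    S_m = sum(len(Y) for _, Y in cluster_data)        # S(m): cluster-m samples over all collectors
    total = 0.0
    for X, Y in cluster_data:
        F_mn = float(np.mean((X @ w_m - Y) ** 2))     # local loss F_{m,n}(w_m)
        total += (len(Y) / S_m) * F_mn                # weight S_{m,n} / S(m)
    return total
```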

[0045] To solve problem (2), the multi-task FL algorithm uses an iterative update scheme. Figure 7 shows a multi-task federated learning algorithm for traffic speed prediction, according to some embodiments of the present invention.

[0046] The learning server first generates an initial global learning model with model parameters $w_{m,0}$ for cluster m and sends $w_{m,0}$ to the data collectors. At the first learning round, i.e., j = 1, all data collectors use the received model parameters $w_{m,0}$ to update the learning models based on their own local data of cluster m by using gradient descent:
$$w_{m,n,j} = w_{m,j-1} - \eta \nabla F_{m,n}(w_{m,j-1}), \qquad (7)$$
where $\eta$ is the learning rate. The data collectors then send their trained model parameters to the learning server, which aggregates all the received local model parameters to update the global model parameters, given by:
$$w_{m,j} = \sum_{n=1}^{N} \frac{S_{m,n}}{S(m)} w_{m,n,j}.$$

[0047] The global model parameters are then sent to the data collectors for the next round of learning. Each learning round is followed by another round, and the same process repeats between the learning server and the data collectors until the total loss function $F_m(w_m)$ for each cluster m is sufficiently small.
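
One learning round of this scheme, combining the local gradient update of (7) with a sample-size weighted aggregation at the server, might look as follows; the single-step update, the MSE gradient, and all names are illustrative assumptions rather than the exact algorithm of Figure 7.

```python
import numpy as np

def local_update(w_global: np.ndarray, X: np.ndarray, Y: np.ndarray, lr: float) -> np.ndarray:
    """Local gradient-descent update of the cluster-m model on one collector's data (cf. Eq. (7))."""
    grad = 2.0 * X.T @ (X @ w_global - Y) / Y.size     # gradient of the local MSE loss
    return w_global - lr * grad

def server_aggregate(local_ws, sizes):
    """Sample-size weighted aggregation of the locally trained parameters."""
    S_m = sum(sizes)
    return sum(w * (s / S_m) for w, s in zip(local_ws, sizes))

def federated_round(w_global, cluster_data, lr=0.01):
    """One round: every collector updates locally, then the server aggregates."""
    local_ws = [local_update(w_global, X, Y, lr) for X, Y in cluster_data]
    sizes = [len(Y) for _, Y in cluster_data]
    return server_aggregate(local_ws, sizes)
```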

[0048] Figure 3A illustrates the traffic prediction architecture 300 using distributed multi-task learning techniques, according to some embodiments of the current invention, where an infrastructure device acts as the learning server 115 to coordinate a set of data collectors 110, which serve as learning agents. In Figure 3A, there are N data collectors; data collector 1 is a toll station, data collector 2 is a loop detector, and data collector N is a camera. The learning server 115 designs M traffic models 310 such that each traffic model is targeted to a specific task, e.g., one model for rush hour traffic and another for off hour traffic, or one model for city traffic and another for freeway traffic. These traffic models are different but correlated, because traffic at one location can impact traffic at other locations and traffic at one time can impact traffic at another time. Accordingly, the learning server 115 defines data clustering rules 320 to divide the data into M clusters 330. The learning server 115 then distributes the rules 320 and traffic models 310 to the data collectors 110. Correspondingly, each data collector 110 divides its data into M clusters 330. It is possible that some of the data clusters are empty. In that case, the data collector does not train the corresponding traffic model. In Figure 3A, data collector 1 has all data clusters, data collector 2 does not have cluster-M data, and data collector N does not have cluster-1 data. As a result, data collector 1 trains all traffic models, data collector 2 does not train traffic model M, and data collector N does not train traffic model 1. After clustering its data, each data collector 110 starts the first round of the distributed training. Each data collector 110 trains the received traffic models independently by using its own data, without sharing its data with other data collectors or the learning server. After a certain number of training iterations, each data collector sends the locally trained traffic models 340 to the learning server, which then aggregates the received traffic models from all data collectors to generate the common global traffic models 350. Upon completion of the model aggregation, the learning server 115 re-distributes the aggregated traffic models 350 to the data collectors for the second round of training. This process of training and aggregation continues until robust traffic models are built. Once the final traffic models are built, the learning server 115 distributes the models to the data collectors 110 for traffic prediction. The predicted traffic depends on the application and can be velocity, traffic flow, the number of specific vehicles, etc. For the route planning application, the traffic speed is predicted. Finally, the data collectors send their local traffic predictions 360 to the learning server to build a global traffic prediction model, i.e., the time-dependent graph, which is then distributed to the vehicles 120 for route planning.

[0049] The traffic prediction is characterized by two parameters, the prediction time horizon and the prediction period. The prediction time horizon represents the farthest time for which the traffic is predicted, and the prediction period indicates how often the traffic is predicted within a prediction time horizon. Prediction periods make up a prediction time horizon. Figure 3B shows an example of a prediction time horizon 370 that consists of six prediction periods 380, according to some embodiments of the present invention. Even though the longer time horizon provides more traffic predictions, the shorter time horizon gives more accurate traffic predictions, because with a longer prediction time horizon a longer traffic series further in the future needs to be predicted. Such expansion of the prediction time horizon inevitably degrades the prediction performance.

[0050] Figure 3C shows a schematic illustrating an example of a traffic prediction architecture 300 that includes the learning server 115 connected to the distributed data collectors (learning agents) 110 via the communication network(s) 112, according to some embodiments of the present invention.

[0051] The learning server 115 may be referred to as a distributed machine learning based traffic prediction server. The learning server 115 is configured to provide map information with respect to traffic predictions to vehicles 120 traveling on a road network. The learning server 115 may include one or more processors 121, a memory 140, a memory unit/storage 200 configured to store traffic prediction neural networks 132, traffic data cluster rules 134, a global time-dependent map 135, trained multi-task traffic models 136 and a global map of the road network 137, and an input interface (or transceiver) 150 configured to communicate with the learning agents 110 via the communication network 112 and update the model parameters of the traffic prediction neural networks 132. The trained multi-task traffic models 136 may be the traffic models 310 shown in Figure 3A.

[0052] The learning server 115 is configured to update the parameters of the traffic prediction neural networks 132 by acquiring the trained parameters of the trained multi-task traffic models 173 that have been trained by the learning agents 110 via the input interface 150 and the communication network 112. This update process continues iteratively at every predetermined elapsed time period.

[0053] In this case, each of the learning agents 110 is arranged at a location on the road network 105 and is configured to locally train multi-task traffic models 174 by collecting traffic data (traffic patterns of vehicles) and clustering the traffic data at the arranged location. Further, the learning server 115 generates a global time-dependent traffic map 135 based on the updated trained multi-task traffic models 136 and distributes the global time-dependent traffic map 135 to the vehicles 120, which then determine optimal routes based on their own driving plans by using the modified A* algorithm shown in Figures 8A and 8B and the global time-dependent traffic map 135. An optimal route is a driving route on the road network 105 from the vehicle's current location to its destination with the least travel time.

[0054] Each of the learning agents 110 may include an interface/transceiver 151 configured to perform data communication with the learning server 115 via the communication network 112. Each learning agent 110 further includes one or more processors 160, a memory 180 connected to a memory unit/storage 170 storing traffic data 171, traffic prediction neural networks 173, trained multi-task traffic models 174, local map of road network 175 and a local time-dependent map 172.

[0055] In some cases, a computer-implemented distributed machine learning based traffic prediction method can be provided for predicting traffic of roads by using one or more hardware devices that include one or more processors in connection with a memory/memory unit/storage storing instructions/programs that cause the one or more processors to perform steps. The steps may include distributing the global multi-task traffic models 136 from the learning server 115 to the learning agents 110 via the communication network 112. Each of the learning agents 110 is configured to locally train the traffic models 136 (310) based on the data signals acquired from the road sub-segments 190, the edge computing devices 185 and the vehicles 120 traveling on the roads. The steps further include uploading/acquiring the locally trained traffic models 173 trained by the learning agents 110 from the learning agents 110 to the learning server 115, and updating the global multi-task traffic models 136 by the learning server 115 using the locally trained traffic model parameters of the trained multi-task traffic models 174. The steps further include generating a time-dependent global traffic map 135 by the learning server 115 using the well-trained global multi-task traffic models 136, distributing the time-dependent global traffic map 135 to each of the vehicles 120 traveling on the roads, and computing an optimal travel route with the least travel time by each of the vehicles 120 using the time-dependent global traffic map 135 based on a driving plan of each of the vehicles 120.

Optimal Route Planning Based on Predicted Travel Time

[0056] Traffic speed on a road network varies with time, e.g., rush hour traffic speed is generally lower than off hour traffic speed. Therefore, the traffic map can be modeled as a time-dependent graph by using the physical road network and the predicted traffic speed. To build the time-dependent graph, the learning server uses the multi-horizon speed predictions from the traffic data collectors and divides the road segments into multiple sub-segments such that the traffic of each road sub-segment is predicted by a unique data collector exclusively located at the sub-segment. Then, the road network is modeled as a time-dependent graph G = (V, ε, W), where the set V of vertices includes the intersections and connecting points of any two adjoining road sub-segments, the edge set ε consists of the road sub-segments connecting two adjacent vertices, and W is the weight set. For an edge e ∈ ε, the weight $w_e(t)$ ∈ W is modeled as the travel time on the edge e at time t, calculated as the ratio between the length of the road sub-segment and the predicted speed. For instance, a road sub-segment 190 in the road network 105 can be a road section on a single road or multiple connected road sections on multiple roads. Different from a static graph, where the weight associated with each edge is a constant value, the counterpart within the graph G is a time-varying variable due to the time-varying speed, e.g., the piecewise linear speed as shown in Figure 5C, to traverse each road sub-segment. Different from a static graph built based on the road map and distance, the vertices can be dynamically selected so that they are not fixed points in the road network, as long as there is at least one data collector between any two vertices. Accordingly, an edge in the time-dependent graph may consist of multiple road segments. Most importantly, the length of the edge is not the physical distance; instead, it is the travel time on the edge. As a result, when the traffic condition changes, the length of the edge also changes, e.g., the length of the same edge may be much longer in rush hour than in off hour. Finally, the learning server distributes the time-dependent graph to the vehicles in its communication coverage for route planning.

[0057] Figure 4 shows an example of a time-dependent graph 400 according to some embodiments of the present invention. In this case, vertices 410 are represented by $v_1$, $v_2$, etc., there is a data collector 420 between any two vertices, and the length of each edge 430 is denoted as $w_{x,y}(t_x)$ 440, where x denotes the starting vertex, y denotes the ending vertex, and time $t_x$ denotes the time the vehicle leaves vertex x. In the figure, the vehicle departs from vertex $v_1$ and is destined for the vertex $v_{12}$. It can be seen that there are many different routes from vertex $v_1$ to vertex $v_{12}$. The route planning is to find a route with the minimum travel time. In the figure, route 450 represents the minimum travel time route, on which the vehicle departs from $v_1$ at time $t_1$ and arrives at $v_3$ at time $t_3$, departs from $v_3$ at time $t_3$ and arrives at $v_7$ at time $t_7$, departs from $v_7$ at time $t_7$ and arrives at $v_{10}$ at time $t_{10}$, and finally departs from $v_{10}$ at time $t_{10}$ and arrives at the destination $v_{12}$ at time $t_{12}$. The data collector $c_{1,3}$ is located on the edge from $v_1$ to $v_3$, the data collector $c_{3,7}$ is located on the edge from $v_3$ to $v_7$, the data collector $c_{7,10}$ is located on the edge from $v_7$ to $v_{10}$, and the data collector $c_{10,12}$ is located on the edge from $v_{10}$ to $v_{12}$.
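
A compact data structure for such a time-dependent graph, in which every edge carries a travel-time function of the departure time rather than a constant weight, is sketched below; the vertex names, lengths, speeds and units are illustrative assumptions only.

```python
from typing import Callable, Dict

# Each directed edge maps to a weight function w_e(t): departure time (minutes) -> travel time (minutes).
TimeDependentGraph = Dict[str, Dict[str, Callable[[float], float]]]

def make_edge(length_m: float, speed_of: Callable[[float], float]) -> Callable[[float], float]:
    """Edge weight = sub-segment length / predicted speed at the departure time (speed in m/s)."""
    return lambda t: length_m / (speed_of(t) * 60.0)      # travel time in minutes

graph: TimeDependentGraph = {
    "v1": {"v2": make_edge(800.0, lambda t: 8.0 if 420 <= t <= 540 else 14.0),  # slower 7:00-9:00
           "v3": make_edge(1200.0, lambda t: 12.0)},
    "v2": {"v4": make_edge(600.0, lambda t: 10.0)},
    "v3": {"v4": make_edge(900.0, lambda t: 13.0)},
    "v4": {},
}
```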

[0058] According to the time-dependent graph G, the vehicle can determine the route leaving the current location s at time t_s to the destination d with the least travel time as follows:

    min   t_d(s, t_s)                                                   (9)
    s.t.  g_s = t_s,                                                    (10)
          g_y = g_x + w_{x,y}(g_x) for each edge (x, y) on the route,   (11)

where the minimization is over all routes from s to d in G, t_d(s, t_s) denotes the travel time when leaving location s at time t_s for the destination location d, and g_x denotes the departure time at vertex x. The constraint (10) is due to the fact that the vehicle departs from s at time t_s, and the constraint (11) represents that the arrival time at the ending vertex of a road sub-segment equals the sum of the departure time at the starting vertex and the travel time on the road sub-segment at that departure time. Solving the optimization problem (9) is different from the route planning problem in a static graph because the optimization problem (9) is defined on a time-dependent graph where the weights are time-varying.
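Constraints (10) and (11) amount to propagating the departure time along a candidate route. The sketch below, reusing the TimeDependentGraph sketch above, illustrates this propagation for a given route; the function name route_travel_time is an assumption.

    # Evaluate t_d(s, t_s) for one candidate route by propagating the
    # departure time along the route as in constraints (10) and (11).
    from typing import Sequence

    def route_travel_time(graph: TimeDependentGraph,
                          route: Sequence[Vertex],
                          t_s: float) -> float:
        g = t_s                                # constraint (10): depart s at t_s
        for x, y in zip(route, route[1:]):
            g = g + graph.weight(x, y, g)      # constraint (11): g_y = g_x + w_{x,y}(g_x)
        return g - t_s                         # total travel time to the destination

The optimization problem (9) then asks for the route that minimizes this quantity over all routes from s to d.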

[0059] Figure 8A shows the modified A* algorithm for optimal route planning based on the time-dependent graph, and Figure 8B shows the optimal route calculation algorithm used by the modified A* algorithm, according to some embodiments of the present invention. The modified A* algorithm is configured to find the optimal route with the least travel time. Within the searching algorithm, the arrival time g_v and the heuristic total travel time l_v, v ∈ V, are initially set to infinity, with the exception of the starting point s, for which g_s = t_s and l_s = g_s + h_d(g_s). The heuristic total travel time is defined as the sum of the arrival time and the heuristic travel time h_d to the destination. The heuristic travel time h_d to the destination is calculated as the ratio between the Euclidean distance to the destination and the maximum speed. Then, the searching process in the modified A* algorithm begins with the starting point s and extends to the adjacent vertices that have adjoining road sub-segments with s. For these adjacent vertices, their arrival time g is updated by comparing the most recently assigned arrival time with the arrival time obtained when taking the route through the starting point s. Meanwhile, their heuristic total travel time l is updated as well. Next, the vertex with the least heuristic total travel time among the neighboring vertices is selected to continue the searching process. The same process is repeated. Finally, when reaching the destination point d, the searching process stops and returns the optimal route selection and its travel time estimation.
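A rough Python rendering of the search described above is given below, again reusing the TimeDependentGraph sketch. It is a sketch of a time-dependent A* variant, not a reproduction of the algorithms in Figures 8A and 8B; the coordinate table coords and the maximum speed v_max are assumed inputs for the Euclidean heuristic.

    # Sketch of a time-dependent A* search: g[v] is the earliest known arrival
    # time at v, and the open list is ordered by l_v = g_v + h_d(v), where
    # h_d(v) is Euclidean distance to the destination divided by v_max.
    import heapq
    import math
    from typing import Dict, List, Optional, Tuple

    def modified_a_star(graph: TimeDependentGraph,
                        coords: Dict[Vertex, Tuple[float, float]],
                        s: Vertex, d: Vertex, t_s: float,
                        v_max: float) -> Tuple[Optional[List[Vertex]], float]:
        def h(v: Vertex) -> float:
            (x1, y1), (x2, y2) = coords[v], coords[d]
            return math.hypot(x2 - x1, y2 - y1) / v_max

        g = {v: math.inf for v in coords}          # arrival times, initially infinity
        g[s] = t_s
        parent: Dict[Vertex, Vertex] = {}
        open_set = [(g[s] + h(s), s)]              # ordered by heuristic total travel time

        while open_set:
            _, v = heapq.heappop(open_set)
            if v == d:                             # destination reached: rebuild the route
                route = [d]
                while route[-1] != s:
                    route.append(parent[route[-1]])
                return route[::-1], g[d] - t_s
            for u in graph.neighbors(v):
                arrival = g[v] + graph.weight(v, u, g[v])
                if arrival < g[u]:                 # better arrival time found
                    g[u] = arrival
                    parent[u] = v
                    heapq.heappush(open_set, (arrival + h(u), u))
        return None, math.inf                      # destination not reachable

A vertex is re-inserted into the open list whenever a better arrival time is found, mirroring the update step described above.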

[0060] The key to time-dependent graph based route planning is to compute the length of a route, i.e., the travel time on the route. Unlike existing route planning methods that use the present traffic conditions to plan a route, the embodiments of the current invention use the traffic prediction to make the optimal route plan. For an edge in the time-dependent graph 400, the corresponding data collector predicts the traffic speed every Δt time period. Therefore, the length of the edge, i.e., the travel time, is dynamically computed using the traffic predictions. As a result, the length of an edge in the time-dependent graph varies as time changes and, therefore, the shape of the graph changes as well, which indicates that the travel time on a route also changes with time.

[0061] There are different ways to calculate the travel time on a road segment. Figures 5A, 5B and 5C show a way to calculate the travel time by using a predicted piecewise linear traffic speed function, according to some embodiments of the present invention. Figure 5A shows an edge e 500 in the time-dependent graph 400, where the start vertex of the edge is v_s 510 and the end vertex of the edge is v_e 520. The edge e consists of two road segments R_1 530 and R_2 540. These two road segments are connected by a data collector 550, which predicts the traffic speed for both road segments R_1 and R_2. The road segment R_1 is short such that vehicles can pass through R_1 in less than one Δt time period, and its physical distance is D_1 560. The road segment R_2 is long such that vehicles need more than one Δt time period to pass through R_2, e.g., 7 Δt time periods, and its physical distance is D_2 570. Assume a vehicle arrives at vertex v_s at time t_s and passes through R_1 within one prediction time period Δt. The predicted traffic speed at time t_s is s_0. Thus, the travel time on road segment R_1 is D_1/s_0. As a result, the vehicle arrives at data collector 550 at time t_c = t_s + D_1/s_0. Therefore, the length w_{v_s,v_e}(t_s) 580 of the edge e in the time-dependent graph can be calculated as w_{v_s,v_e}(t_s) = w_{v_s,c}(t_s) + w_{c,v_e}(t_c), where w_{v_s,c}(t_s) = D_1/s_0 as shown in Figure 5B. The calculation of w_{c,v_e}(t_c) is illustrated in Figure 5C, where the vehicle arrives at the data collector at time t_c. Assume the time t_c is within a prediction period. The length w_{c,v_e}(t_c) is computed using a piecewise speed function, where s_1 is the predicted speed at time t_c and the vehicle travels only αΔt time in the first prediction period with 0 < α < 1, s_2 is the predicted speed at time t_c + αΔt, s_3 is the predicted speed at time t_c + (1+α)Δt, s_4 is the predicted speed at time t_c + (2+α)Δt, s_5 is the predicted speed at time t_c + (3+α)Δt, s_6 is the predicted speed at time t_c + (4+α)Δt, and finally, s_7 is the predicted speed at time t_c + (5+α)Δt and the vehicle travels only βΔt in the last prediction period with 0 < β < 1. Therefore, the travel time w_{c,v_e}(t_c) on the road segment R_2 equals αΔt + 5Δt + βΔt = (α + 5 + β)Δt. Accordingly, the travel distance in the first prediction period is d_1 = αΔt·s_1, the travel distance in the second prediction period is d_2 = Δt·s_2, the travel distance in the third prediction period is d_3 = Δt·s_3, the travel distance in the fourth prediction period is d_4 = Δt·s_4, the travel distance in the fifth prediction period is d_5 = Δt·s_5, the travel distance in the sixth prediction period is d_6 = Δt·s_6, and the travel distance in the last prediction period is d_7 = βΔt·s_7. The summation of these distances equals the physical distance of road segment R_2, i.e., d_1 + d_2 + d_3 + d_4 + d_5 + d_6 + d_7 = D_2. Finally, the length of the edge e is w_{v_s,v_e}(t_s) = D_1/s_0 + d_1/s_1 + d_2/s_2 + d_3/s_3 + d_4/s_4 + d_5/s_5 + d_6/s_6 + d_7/s_7 = D_1/s_0 + (α + 5 + β)Δt.
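The same calculation can be expressed as a short routine. The sketch below treats the predicted speed as constant within each Δt prediction period, consistent with the d_i terms above; D, dt, speeds and the function name piecewise_travel_time are illustrative, not from the specification.

    # Travel time over a road segment of length D when entering it at time t_c,
    # consuming the remaining alpha*dt of the current prediction period first,
    # then full dt periods, then a final fraction beta*dt at the last speed.
    from typing import List

    def piecewise_travel_time(D: float, t_c: float, dt: float,
                              speeds: List[float]) -> float:
        period = int(t_c // dt)                     # prediction period containing t_c
        remaining = D                               # distance still to cover
        t = t_c
        alpha = 1.0 - (t_c - period * dt) / dt      # usable fraction of the first period
        available = alpha * dt
        while remaining > 0:
            s = speeds[min(period, len(speeds) - 1)]
            d_covered = s * available               # distance covered if the whole window is used
            if d_covered >= remaining:
                return (t + remaining / s) - t_c    # last, partial window (beta*dt)
            remaining -= d_covered
            t += available
            period += 1
            available = dt                          # every later window is a full period
        return t - t_c

Called with D = D_2, the arrival time t_c, and a per-period forecast list whose entry for the period containing t_c is s_1, it returns (α + 5 + β)Δt for the example above.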

[0062] Figure 6 shows the functional blocks and the interaction among the components of the intelligent traffic system 100 shown in Figure 1, according to some embodiments of the current invention, where each data collector 110 has a local database 600 and the learning server 115 designs 605 the traffic models and the corresponding rules to cluster data. The learning server then distributes 610 the data clustering rules and the traffic models to the data collectors and coordinates multiple rounds of distributed training. Upon receiving the rules and traffic models, each data collector clusters 615 its local data and trains the traffic models using its local data as shown in Figure 2. After completion of the distributed training, the learning server builds 620 the global traffic models by aggregating the locally trained traffic models. The learning server then distributes 625 the global traffic models to the data collectors, which make 630 local traffic predictions. The local traffic predictions are sent 635 to the learning server. The learning server builds the time-dependent graph 640 by using the road network 105 and the traffic predictions from the data collectors, and distributes the time-dependent graph to vehicles 120 for route planning. The vehicles plan 645 their routes for the minimum travel time by using the time-dependent graph. The vehicles travel 650 on the planned routes to arrive at their destinations with the minimum travel time. When vehicles travel on their planned routes, certain unexpected events such as new road construction or a traffic accident can occur. If the observed events cause the travel time to deviate from the planned travel time, the vehicles can feed back 655 these events to the learning server to update the traffic prediction.
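The interaction in Figure 6 can be summarized as an orchestration loop. The sketch below is illustrative only: server, collectors and vehicles are hypothetical objects and every method name is an assumption, with the Figure 6 step numerals noted in the comments.

    # Hypothetical orchestration of the Figure 6 interaction; none of these
    # interfaces are defined by the specification.
    def run_traffic_system(server, collectors, vehicles, num_rounds: int):
        models, rules = server.design_models_and_cluster_rules()            # step 605
        for _ in range(num_rounds):
            server.distribute(models, rules, collectors)                    # step 610
            local_models = []
            for c in collectors:
                c.cluster_local_data(rules)                                 # step 615
                local_models.append(c.train_locally(models))
            models = server.aggregate(local_models)                         # step 620
        server.distribute_global_models(models, collectors)                 # step 625
        predictions = [c.predict_local_traffic(models) for c in collectors] # steps 630, 635
        graph = server.build_time_dependent_graph(predictions)              # step 640
        for v in vehicles:
            route = v.plan_minimum_travel_time_route(graph)                 # step 645
            events = v.travel(route)                                        # step 650
            if events:
                server.update_traffic_prediction(events)                    # step 655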

[0063] The above-described embodiments of the present disclosure can be implemented in any of numerous ways. For example, the embodiments may be implemented using hardware, software, or a combination thereof. Use of ordinal terms such as “first” and “second” in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed; such terms are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term).

[0064] Although the present disclosure has been described with reference to certain preferred embodiments, it is to be understood that various other adaptations and modifications can be made within the spirit and scope of the present disclosure. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the present disclosure.

[0065] Also, the embodiments of the present disclosure may be embodied as a method or a computer-implemented method, of which an example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.