
Title:
PREDICTING ATTAINABLE LOCATIONS
Document Type and Number:
WIPO Patent Application WO/2015/181568
Kind Code:
A2
Abstract:
A method of predicting attainable locations for a vehicle, based upon its current location, the prediction using map data representing a map of a physical area around the current location and the map data comprising a plurality of navigable segments traversable by a vehicle, the method comprising obtaining location data determining the location of the vehicle within the map represented by the map data; estimating the energy required to traverse each of the navigable segments within the map data; processing the map data together with the energy estimate to provide the energy requirement for substantially every passable route of the vehicle from its current location to each destination within the area; monitoring an energy source of the vehicle to determine the energy remaining therein; processing the remaining energy available together with the energy requirement for substantially every passable route to determine attainable locations for the vehicle; and outputting the attainable locations for the vehicle.

Inventors:
ONDRUSKA, Peter (Ewert House, Ewert Place, Summertown, Oxford, Oxfordshire OX2 7SG, GB)
POSNER, Ingmar (Ewert House, Ewert Place, Summertown, Oxford, Oxfordshire OX2 7SG, GB)
Application Number:
GB2015/051575
Publication Date:
December 03, 2015
Filing Date:
May 29, 2015
Assignee:
ISIS INNOVATION LIMITED (Ewert House, Ewert Place, Summertown, Oxford, Oxfordshire OX2 7SG, GB)
International Classes:
F25J1/00
Attorney, Agent or Firm:
GOSNALL, Toby (100 Hagley Road, Edgbaston, Birmingham West Midlands B16 8QQ, GB)
Claims:
CLAIMS

1. A method of predicting attainable locations for a vehicle, based upon its current location, the prediction using map data representing a map of a physical area around the current location and the map data comprising a plurality of navigable segments traversable by a vehicle, the method comprising: i) obtaining location data determining the location of the vehicle within the map represented by the map data; ii) estimating the energy required to traverse each of the navigable segments within the map data; iii) processing the map data together with the energy estimate to provide the energy requirement for substantially every passable route of the vehicle from its current location to each destination within the area; iv) monitoring an energy source of the vehicle to determine the energy remaining therein; v) processing the remaining energy available together with the energy requirement for substantially every passable route to determine attainable locations for the vehicle; and vi) outputting the attainable locations for the vehicle.

2. A method according to claim 1 in which the attainable locations for the vehicle are updated as the vehicle traverses a navigable segment.

3. A method according to claim 1 or 2 in which the navigable segments are modelled as a network such that a trajectory through the network corresponds to a passable route for the vehicle.

4. A method according to claim 3 which comprises transposing the network of navigable segments and constructing Markov Decision Processes (MDPs) based on the transposed network.

5. A method according to claim 4 in which the MDP is solved to determine the expected energy requirement of each passable route.

6. A method according to claim 4 or 5 wherein the MDPs are defined in terms of:

(a) the set of states composed of all navigable segments within the map, S = {s1, s2, ..., sN};

(b) the set of available actions composed of all possible turns at ends of navigable segments, A = {a1, a2, ..., aM};

(c) a deterministic transition model, P; and

(d) a reward structure, R, with one reward associated with each state-action pair, wherein the rewards express the energy cost of choosing a particular action in a particular state.
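The four-part MDP of claim 6 can be sketched in code as follows. This is an illustrative encoding only, not the patented implementation; the segment identifiers, turn names and energy costs are invented for the example.

```python
# Illustrative sketch of the MDP of claim 6: states are navigable
# segments, actions are turns at segment ends, the transition model is
# deterministic, and each (state, action) pair carries an energy cost
# expressed as a (negative) reward. All values below are hypothetical.

# (a) states S: one per navigable segment
S = ["s1", "s2", "s3"]

# (b) actions A: possible turns at the end of a segment
A = ["left", "right", "straight"]

# (c) deterministic transition model P: (state, action) -> next state
P = {
    ("s1", "right"): "s2",
    ("s1", "straight"): "s3",
    ("s2", "left"): "s3",
}

# (d) reward structure R: energy cost of choosing an action in a state
R = {
    ("s1", "right"): -1.2,     # e.g. kWh to traverse s1 then turn right
    ("s1", "straight"): -0.8,
    ("s2", "left"): -0.5,
}

def step(state, action):
    """Apply the deterministic transition model; return (next state, reward)."""
    return P[(state, action)], R[(state, action)]
```

Solving such an MDP (claim 5) then amounts to accumulating these rewards along trajectories to obtain the expected energy requirement of each passable route.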

7. A method according to any preceding claim in which the determined vehicle range is plotted on a map.

8. A method according to any preceding claim in which the steps i) to vi) are provided in the order shown.

9. A method according to claim 1 in which, in step (v), energy requirements for each route are weighted based on driver preferences.

10. A method, directly or indirectly, according to claim 3 which is arranged to use the map data to generate a model of the choices of a given driver as a plurality of state-action pairs, with at least one pair being associated with a node of the network, and wherein the probability of the given driver taking a particular trajectory is expressed as the probability of observing the corresponding state-action sequence through the network.

11. A method according to claim 10 in which a learned characteristic is used to determine the probability of the given driver taking a particular trajectory.

12. A method according to claim 11 in which the learned characteristic includes data recording the selection of segments along which the given driver chooses to drive the vehicle.

13. A method according to claim 11 or 12 in which the learned characteristic is updated from data which is recorded by a processing apparatus as the vehicle, to which the method is being applied, is driven by the given driver.

14. A method according to any preceding claim in which a first Markov Decision Process (MDP) is used to model trajectory preferences and energy consumption.

15. A method according to claim 14 which uses a second MDP, related to the first, and consisting of the same set of states, actions and rewards but with a transition model P' which encodes the transpose of the original road graph.

16. A method according to any preceding claim in which the energy required for the vehicle to traverse a segment is modelled using a distribution.

17. A method according to claim 16 in which the method calculates, using the distribution, a probability with which the vehicle will reach a destination within the map.

18. A method according to claim 16 or 17 in which the distribution used for the vehicle is learnt from the previous performance of the vehicle.

19. A vehicle including a processing apparatus, the processing apparatus being arranged to predict attainable locations for the vehicle, based upon its current location, the prediction using map data representing a map of a physical area around the current location and the map comprising a plurality of navigable segments traversable by a vehicle, in which, in use, the vehicle is operated, the processing apparatus being further arranged to perform the following steps: i) obtain location data determining the location of the vehicle within the map represented by the map data; ii) estimate the energy required to traverse each of the navigable segments within the map; iii) process the map data together with the energy estimate to provide the energy requirement for substantially every passable route of the vehicle from its current location to each destination within the area; iv) monitor an energy source of the vehicle to determine the energy remaining therein; v) process the energy available together with the energy requirement for substantially every passable route to determine attainable locations for the vehicle; and vi) output the attainable locations for the vehicle.

20. A machine readable medium containing instructions which when read by a processing apparatus cause that processing apparatus to perform the method of any of claims 1 to 17.

Description:
PREDICTING ATTAINABLE LOCATIONS

This invention relates to predicting attainable locations for a vehicle. In alternatives, the invention may be thought of as determining the range of a vehicle. In particular, but not exclusively, the invention relates to predicting attainable locations and/or determining the range of an electric vehicle, perhaps over substantially all of the routes open to that vehicle from its location. The invention may also relate to learning driver preferences on which range estimation can be based.

It is convenient to describe the background of the invention in relation to electric vehicles but the invention is not necessarily so limited and may be applied to vehicles of other fuel types. According to recent market forecasts the number of electric vehicles (EVs) on the roads worldwide is set to increase from around 150,000 in 2013 to over two million by 2020. The adoption of this technology is driven largely by environmental, economic and political factors. However, recent studies have shown that one of the primary impediments to such mass-market adoption is range anxiety due to inaccurate in-situ estimates of available vehicle range. As a result, many studies now exist aimed at modelling the energy consumption and factors influencing it, such as a driver's likely acceleration profile. Often, these systems provide the driver with an indication of attainability for a particular destination specified by the driver.

According to a first aspect of the invention there is provided a method of determining a range for a vehicle using map data representing a map of a physical area, the method comprising at least one of the following steps: i) obtaining location data determining the location of the vehicle within the map represented by the map data; ii) processing the map data to provide the energy requirement for substantially each possible route of the vehicle from its location to a destination within the area; and iii) processing the energy available to the vehicle to determine the range along each of the possible routes for the vehicle. The steps i) to iii) may be performed in the order shown.

Embodiments providing such a method are believed advantageous because they allow a driver thereof to be able to determine whether or not any destination is within the vehicle's range without the need to enter a specific destination. That is, knowing the range allows a determination to be made as to whether a location is attainable or not. Such embodiments are therefore more convenient than the prior art and the technical problem of how to accurately estimate whether or not any possible destination is within the vehicle's range is addressed.

Moreover, embodiments may be arranged to automatically output attainable locations such that a driver may simply drive his/her vehicle and have the attainable locations updated as he/she drives.

Herein, possible route may be taken to mean a route ordinarily thought of as being passable by the vehicle that is using the method; i.e. the route is navigable. Accordingly, a car that is ordinarily driven along a road would not ordinarily consider routes along farm tracks, unpaved roads and the like. A Heavy Goods Vehicle, or Passenger Service Vehicle, may not ordinarily consider routes along small single-track roads and the like.

Embodiments providing the method of the first aspect may be thought of as summing the expected energy expenditure along each of the navigable segments within the map data. Typically, the map data will contain a plurality of navigable segments with each segment representing at least a portion of a navigable road. For example, a segment of a road may exist between junctions at which a driver has a choice of direction. As such, when a vehicle is on a segment its destination is known; i.e. the end of the segment, unless the vehicle were to stop on the segment itself.
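The notion of summing expected energy expenditure over the navigable segments making up a route can be sketched as follows; the segment identifiers and per-segment energies below are invented purely for illustration.

```python
# Map data as a set of navigable segments, each with an estimated
# traversal energy (hypothetical values, in kWh). Each key names the
# segment between two junctions, e.g. "A-B" runs from junction A to B.
segment_energy = {
    "A-B": 0.4,
    "B-C": 0.7,
    "C-D": 0.3,
}

def route_energy(route, segment_energy):
    """Expected energy for a route given as an ordered list of segment
    identifiers: the sum of the per-segment energy estimates."""
    return sum(segment_energy[s] for s in route)
```

A route from A to C via B would thus cost the energy of "A-B" plus that of "B-C"; in a full embodiment these per-segment estimates would come from the energy model rather than a fixed table.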

Moreover, the skilled person will appreciate that here substantially each is considered to mean that substantially each of the segments that are input to the method will be considered. Should segments present within the map data be suppressed from consideration by the method then the method will still consider substantially each of the segments input thereto.

Conveniently, at least some embodiments calculate the range along each of the possible routes within the map data in real-time, or at least substantially in real-time. Here real-time is intended to mean as a driver drives the vehicle. It is likely to take a vehicle several seconds to reach the next junction of the navigable route along which it is travelling, which may be thought of as being a segment of the map data. It is convenient if the calculation of the range can be updated in the time that it typically takes the vehicle to reach the next junction (i.e. the time taken to traverse an average segment) and this may be thought of as being real-time. For example, real-time may mean on the order of substantially any of the following times: 2 seconds; 5 seconds; 10 seconds; 15 seconds; 30 seconds.

The skilled person will appreciate that the larger the physical area represented by the map data, the more segments covered, etc., the more processing will be required to process the map data. As such, at least some of the embodiments may constrain the map data to be considered by the method in order that the method can be performed in a short enough period. Here short enough may be considered to mean in real-time as described above.

Constraining the map data may mean limiting the segments covered by the method. For example, the segments considered might be limited according to a driver's driving preferences. For example, if the driver's preferences indicate that the driver does not drive on motorways then the method may not consider motorways.

Alternatively, or additionally, constraining the map data may comprise limiting the map data considered to being a predetermined area. The range of the vehicle may constrain the area considered. For example, should the range of the vehicle be on the order of a few tens of kilometres then the area considered might also be a few tens of kilometres. Conveniently, the map data covers a large enough area to include the locations attainable by the vehicle. In some embodiments, the area may be a substantially circular area centred upon the vehicle. In such embodiments, the circular area may have a radius of substantially any of the following, in kilometres: 30, 40, 50, 60, 70, 80, 90, 100, 150, or the like.
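A crude sketch of such a circular constraint follows; the latitude/longitude coordinates are hypothetical and the flat-earth approximation used here is only a rough filter, adequate for deciding which segments to pass to the method rather than for precise distances.

```python
import math

def within_radius(vehicle, node, radius_km):
    """Rough planar check that a map node lies within a circular area
    centred on the vehicle. Both points are (lat, lon) in degrees.
    Uses ~111 km per degree of latitude, scaling longitude by cos(lat);
    a deliberate simplification for pre-filtering map segments."""
    dlat = (node[0] - vehicle[0]) * 111.0
    dlon = (node[1] - vehicle[1]) * 111.0 * math.cos(math.radians(vehicle[0]))
    return math.hypot(dlat, dlon) <= radius_km
```

Segments with both endpoints outside the circle could then be dropped before the more expensive energy computation is run.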

The vehicle range may be plotted on a map providing a convenient user interface on which a driver can readily interpret the remaining range for the vehicle.

Conveniently, the map data comprises a plurality of navigable segments traversable by a vehicle and in which the method models the navigable segments within the map data as a network such that the trajectory through the network corresponds to a route for the vehicle. Such a method provides a convenient structure which can be processed computationally efficiently.

The map data may be used to generate a model of the choices of a driver as a plurality of state-action pairs, with at least one pair being associated with a node of the network, and wherein the probability of a driver taking a particular trajectory is expressed as the probability of observing the corresponding state-action sequence through the network.

At least some embodiments may additionally use a learned characteristic to determine the probability of a driver taking a particular trajectory. Such embodiments are convenient as they allow the method to be tailored to a particular driver, which may be thought of as being a given driver.

Conveniently, the learned characteristic is updated from data which is recorded by a processing apparatus (or otherwise made available) as the vehicle, to which the method is being applied, is driven by a given driver.

Some embodiments may use a first Markov Decision Process (MDP) to model trajectory preferences and energy consumption. A second MDP, related to the first, and consisting of the same set of states, actions and rewards but with a transition model P' which encodes the transpose of the original road graph may also be used.
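The transposed transition model P' mentioned above amounts to reversing every directed edge of the road graph. A minimal sketch, with a hypothetical dictionary-of-dictionaries graph representation (node names and costs invented for illustration):

```python
def transpose(graph):
    """Reverse every directed edge of a road graph represented as
    {node: {neighbour: energy_cost}}. The transposed graph corresponds
    to the transition model P' described above: states, actions and
    rewards are unchanged, only edge directions are flipped."""
    t = {n: {} for n in graph}
    for u, nbrs in graph.items():
        for v, cost in nbrs.items():
            t.setdefault(v, {})[u] = cost
    return t
```

Running computations on the transpose lets "cost to reach a given location from everywhere" be evaluated as a single-source problem, which is one reason such a construction can be computationally convenient.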

According to a second aspect of the invention there is provided a vehicle including a processing apparatus, the processing apparatus being arranged to determine a range for the vehicle using map data representing a map of a physical area in which, in use, the vehicle is operated, the processing apparatus being further arranged to perform the following steps: i) obtain location data determining the location of the vehicle within the map represented by the map data; ii) process the map data to provide the energy requirement for substantially each possible route of the vehicle from its location; and iii) process the energy available to the vehicle to determine the range along each of the possible routes for the vehicle.

According to a third aspect of the invention there is provided a machine readable medium containing instructions which when read by a processing apparatus cause that processing apparatus to perform the method of the first aspect of the invention.

According to a fourth aspect of the invention there is provided a method of determining a range for a vehicle using map data representing a map of a physical area, the method comprising: i) obtaining location data determining the location of the vehicle within the map represented by the map data; ii) processing the map data to determine the range of a vehicle to at least one destination, the range being based upon a learned characteristic providing a driver's driving preferences; and iii) updating the learned characteristics as the vehicle performs the journey.

Embodiments providing such an approach account for driver-specific route preferences and/or route-specific factors by integrating over driver-specific distributions over possible trajectories to every possible destination.

Some embodiments may use a Markov Decision Process (MDP) to optimise an a priori unknown reward structure in order to update the learned characteristics as the vehicle is used. Conveniently the reward structure used in the MDP is a weighted linear combination of features. Advantageously, the use of an MDP framework allows for at least one of the following:

(i) a principled way of learning the route preferences (as described in Ziebart, B. D.; Maas, A. L.; Bagnell, J. A.; and Dey, A. K., 2008. "Maximum entropy inverse reinforcement learning". In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 1433-1438);

(ii) a convenient way of encoding the energy usage prediction problem; and

(iii) a computationally efficient (real-time capable) method of computing driver-specific energy usage predictions to substantially every destination in a map, taking into account all passable routes to destinations.
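The reward structure described above as a weighted linear combination of features can be sketched as follows; the feature names and weight values are hypothetical, and in a real embodiment the weights would be learned (for example via inverse reinforcement learning) rather than fixed by hand.

```python
def reward(features, weights):
    """Reward for a state-action pair as a weighted linear combination
    of segment features. Negative values express energy/preference cost."""
    return sum(weights[k] * v for k, v in features.items())

# Hypothetical per-segment features and learned preference weights.
features = {"length_km": 2.0, "elevation_m": 15.0, "is_motorway": 1.0}
weights = {"length_km": -0.3, "elevation_m": -0.01, "is_motorway": -0.5}
```

With such a parameterisation, adjusting the weight vector is all that is needed to tailor the MDP's rewards to a particular driver.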

Advantageously, being able to calculate energy usage predictions for every destination for all routes means that the driver does not have to specify where he/she is going and what route would be preferred. Such embodiments may therefore automatically display attainable locations as a driver uses his/her vehicle.

Alternatively or additionally, other methods may be used instead of a Markov Decision Process. Additionally or alternatively, some embodiments may use Inverse Reinforcement Learning (IRL) to update the learned characteristics as the vehicle is used. Advantageously, IRL provides a principled, generalisable and efficient mathematical framework for updating the learned characteristics. Optionally, other techniques could be used, such as regression (e.g. using Gaussian Processes).

Some embodiments may additionally use the expected energy usage of the vehicle to travel along a route in order to determine the range / predict attainable locations.

Conveniently, the method determines the range / determines attainable locations along each of the routes open to a vehicle from its present location.

In some embodiments, a confidence level with which a particular driver will be able to reach a particular destination is calculated. The confidence level can also be referred to as a probability of attainability. Preferably, a current state of charge of the battery of an electric vehicle is used as an input for the calculation of the confidence level. Advantageously, the confidence level is computed for all destinations in the map, specified by the map data, simultaneously. At least one of the following is considered in a model used for calculating the probability of attainability of a location:

(i) the driver (e.g. the learned characteristics);

(ii) the environment;

(iii) on-board auxiliary systems; and

(iv) vehicle battery systems.

These factors may be treated as potential sources of estimation noise in the model used for calculating the confidence level.
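One way such a confidence level could be computed is sketched below: if the combined estimation noise from the sources listed above is modelled as Gaussian (a modelling assumption made here for illustration, not mandated by the text), the probability that the remaining charge covers a route's energy need is a normal CDF evaluation. The parameter names are hypothetical.

```python
import math

def attainability(required_kwh, soc_kwh, sigma_kwh):
    """Probability that the battery's state of charge (soc_kwh) covers
    the estimated energy requirement of a route (required_kwh), with the
    combined estimation noise treated as Gaussian with standard
    deviation sigma_kwh."""
    z = (soc_kwh - required_kwh) / sigma_kwh
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

Evaluating this for every destination's route energy yields the per-location confidence levels from which a probabilistic attainability map can be drawn.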

Advantageously, the calculation of the confidence level allows the creation of a confidence level map (which may also be thought of as a probabilistic attainability map or a probabilistic range map) in which the confidence level is marked or otherwise recorded with reference to the map.

Optionally, the confidence level is calculated using a feature-based linear regression framework. Advantageously, this allows for a computationally efficient implementation capable of providing real-time updates of the resulting confidence level map, wherein real-time may have the same meaning as described elsewhere herein.
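As a toy stand-in for the feature-based linear regression framework mentioned above, the following fits a single-feature ordinary least squares model in closed form; a real embodiment would regress over several features, and the data values here are invented for illustration.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for one feature: returns (intercept, slope)
    minimising the squared error of ys against xs. A deliberately
    minimal sketch of a feature-based regression step."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope
```

The closed form keeps each update cheap, which is in the spirit of the real-time capability the text attributes to the regression framework.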

In some embodiments, one or more regions and/or lines may be marked or otherwise highlighted on the map to generate the confidence level map. The regions and/or lines may be marked by the use of different colours, shadings, lines, line types or other visual indicators.

Optionally, the regions relate to different ranges of confidence level that a destination can be reached. For example, a 90-95% confidence level may be presented, wherein locations within the region or regions are calculated as being attainable with a probability of 90-95%. In some embodiments, the regions are delineated by lines. In other embodiments, the regions may merge from one to the next. Optionally, the regions in such embodiments are marked by colour or shading variations. Advantageously, a scale is provided indicating to which confidence level each visual indicator relates. The skilled person will understand that any percentage or percentage range may be chosen, and any number of lines and/or regions.

According to a fifth aspect of the invention there is provided a vehicle including a processing apparatus, the processing apparatus being arranged to determine a range for the vehicle using map data representing a map of a physical area in which, in use, the vehicle is operated, the processing apparatus being further arranged to perform the following steps: i) obtaining location data determining the location of the vehicle within the map represented by the map data; ii) processing the map data to determine the range of a vehicle to at least one destination, the range being based upon a learned characteristic providing a driver's driving preferences; and iii) updating the learned characteristics as the vehicle performs a journey.

According to a sixth aspect of the invention there is provided a machine readable medium containing instructions which when read by a processing apparatus cause that processing apparatus to perform the method of the fifth aspect of the invention.

The machine readable medium of any of the aspects and/or embodiments described herein may be any of the following: a DVD ROM/RAM (including -R/-RW and +R/+RW); a CD; a memory (including a USB flash drive; an SD card; a compact flash card; or the like); a hard drive (including both those with rotating platters and Solid State Drives); tape; an Internet download (including http; https; ftp; or the like); a wire.

The skilled person will appreciate that a feature described in relation to any one embodiment and/or aspect of the invention may be applied, mutatis mutandis, to any other embodiment and/or aspect of the invention.

There now follows, by way of example only, a detailed description of embodiments of the invention, in which:

Figure 1 shows three example range maps (a, b and c) generated by an embodiment;

Figure 2 shows examples of likely trajectories between the same start and destination induced by three different route preferences;

Figure 3a shows a trajectory spanning states and actions in an original road network (a);

Figure 3b shows a trajectory spanning states and actions for the trajectory of Figure 3a but in a transposed network;

Figure 4 shows the forces acting on a car;

Figure 5 shows velocity, acceleration and power demand profiles for three adjacent navigable segments respectively containing a speed limit, a stop sign and a slight elevation;

Figure 6 shows how the mean predictive error changes as the number of training trajectories increases for a variety of models;

Figure 7 shows a flow chart outlining a first embodiment of the invention;

Figure 8 shows a flow chart outlining a second embodiment of the invention; and

Figures 9A and 9B show examples of range maps generated by an embodiment in which probabilistic attainability is calculated.

Embodiments are based upon map data which represents a map showing an area through which a vehicle using an embodiment can move. In particular the map is considered to comprise a network of navigable paths composed of individual navigable segments (i.e. segments) which are joined at intersections. Each segment represents a section of a navigable path for a vehicle traversing the area covered by the map. Typically, the segments will represent portions of a road, and for example may exist between adjacent intersections on a given road. However, a segment might also represent a portion of a track, a path, a rail, a canal, a tow-path, a cycle path, or the like. Typically, embodiments, including the one being described, will limit the segments that are considered to those that are passable by the vehicle for which the method is being performed. As such, a method applied for a car will ordinarily consider road segments but would be unlikely to consider farm tracks, unpaved roads and the like.

Such map data representing the network of navigable paths together with associated route infrastructure (e.g. number and location of traffic lights, stop signs, etc.) can be readily obtained from, for example, community projects such as OpenStreetMap (www.openstreetmap.org). Alternatively, or additionally, proprietary data such as obtainable from providers such as Tele-atlas, Navteq, or the like may be used to provide the map data.

Reference is made, in the following, to calculating the range of a vehicle. It will be appreciated that determining a range along one or more routes to a location determines whether or not the location is attainable. As such, the two terms are inter-related. Attainability may be determined by comparing an expectation of the amount of energy required to get to a place with the estimated state-of-charge of the battery of the electric vehicle. The expectation of the required amount of energy will depend upon range/distance, and may also depend upon environmental and driver preference factors, among other variables. In at least some embodiments, attainability may be thought of as a probability of reaching a specified place. Attainability may be calculated as an average over several routes, weighted by route preferences.

The skilled person would understand that saying that a vehicle has a range of 20 miles, for example, may not be sufficiently accurate. A 20 mile journey downhill, on a smooth road, with few stops and starts, takes much less energy than a 20 mile uphill journey in stop-start traffic, for example. A driver may therefore assume that a stated range of 20 miles would allow him or her to reach a location 19 miles away; depending on terrain and traffic, this may not be the case.

Attainability takes account of the differing energy requirements to travel the same distance, in different conditions and/or on different routes.
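The point that equal distances need unequal energy can be illustrated with a rough traction-energy calculation. This is a simplified physical sketch only (it ignores aerodynamic drag and regenerative braking), and the vehicle mass, rolling-resistance coefficient and climb figures are invented for the example.

```python
G = 9.81  # gravitational acceleration, m/s^2

def traction_energy_kwh(mass_kg, distance_m, climb_m, crr=0.01):
    """Rough traction energy for a journey: rolling resistance over the
    distance plus gravitational work over any net climb. Drag and
    regeneration are deliberately ignored in this sketch."""
    rolling = crr * mass_kg * G * distance_m
    grade = mass_kg * G * max(climb_m, 0.0)  # no credit for descents here
    return (rolling + grade) / 3.6e6         # joules -> kWh

flat = traction_energy_kwh(1500, 32000, 0)      # ~20 miles on the flat
hilly = traction_energy_kwh(1500, 32000, 300)   # same distance, 300 m climb
```

Even this crude model roughly doubles the energy demand for the climbing journey, which is precisely why a single stated range figure can mislead a driver.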

In order to provide any of the embodiments described herein, a processing apparatus is provided and arranged to process the various data that are described. In the embodiment being described, it is convenient that the processing apparatus is provided within the vehicle 100, such as seen at 400 in Figure 4. Typically, the processing apparatus comprises a processor such as an Intel™ i5™ or i7™, an AMD™ FX™ or Athlon™, or the like. The processing apparatus comprises a memory arranged to hold data and allow the processor to access, and process, that data in a manner that will be familiar to the skilled person.

The memory may comprise any suitable form of memory such as RAM (whether volatile or not), a hard-drive; a solid state drive, etc. The memory may be local to the processor, accessible over a network by the processor, or a combination of local and networked. Indeed, it is conceivable that the memory is provided remote from the vehicle 100 and accessed via a wireless link (such as a GSM; UMTS; 3G; 4G; WIFI; or the like).

The vehicle 100 also comprises a location sensor arranged to determine the location of the vehicle (i.e. to obtain location data 700, 800), which is typically performed with reference to the map data. In the embodiment being described, the location sensor 402 comprises a receiver for a positioning network and in particular the location sensor 402 is a GPS (Global Positioning System) receiver arranged to receive signals from the GPS system and determine the location of the vehicle 100. The GPS receiver 402 generates location data which is passed to the processing apparatus 400. In other embodiments a different location sensor may be provided. For example, a different positioning network may be used, such as any of the following: Global Navigation Satellite System (GLONASS); Galileo Positioning System; India Regional Navigational Satellite System; Chinese Compass Navigation System. In yet further embodiments, the location sensor 402 may be other than a receiver for a positioning network and may for example utilise cellular telephone information; gyroscopes within the vehicle 100; video camera systems arranged to determine the trajectory of the vehicle 100; or the like. Indeed, it is conceivable that a vehicle 100 may comprise a plurality of the location sensors 402 listed here in order to take advantage of the relative merits of the different sensor technologies.

Determining the range of a vehicle may be thought of as determining which destinations within the map are energetically attainable; that is, which destinations are attainable with the energy stored within the vehicle. It is convenient to refer to an electric vehicle, such as an electric car. However, the skilled person will appreciate the techniques described herein may be applied, mutatis mutandis, to any other type of fuelled vehicle, such as for example petrol (gas); LPG (Liquid Petroleum Gas); Diesel; bio-fuel; hydrogen; or the like.
However, regardless of the fuel type of the vehicle 100, the vehicle comprises further energy sensors arranged to determine energy data providing an estimate of the amount of energy remaining available to the vehicle. In the case of an electric vehicle 100 the energy sensors may comprise at least the following: a current sensor, battery voltage sensor and/or temperature sensor. The energy remaining within the battery relates to the state of charge of the battery. Current state of charge may be calculated from the output of the energy sensors.
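A coulomb-counting style update from the current and voltage sensors mentioned above can be sketched as follows; this is a simplified illustration that ignores temperature effects and battery inefficiency, and the function name and numeric values are hypothetical.

```python
def update_soc(soc_kwh, current_a, voltage_v, dt_s):
    """Update the battery state of charge by integrating the electrical
    power (current * voltage) drawn over a time step dt_s, converting
    joules to kWh. Clamps at zero; losses and temperature are ignored
    in this sketch."""
    drawn_kwh = current_a * voltage_v * dt_s / 3.6e6
    return max(soc_kwh - drawn_kwh, 0.0)
```

Summing such updates as the vehicle is driven gives a running estimate of the remaining energy against which route energy requirements can be compared.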

In at least some embodiments wherein probabilistic attainability is calculated, uncertainty in the estimation of the state of charge of the battery is accounted for by a noise term. The origin of the noise or uncertainty is not important if the magnitude of the noise or uncertainty is known. For example the vehicle may report 80%, and a +/- 5% uncertainty margin may be provided for use in the probability calculations. In the case of other fuel types, different energy sensors may be used which may give an indication of the remaining volume of fuel (and therefore energy) on-board the vehicle 100. Such energy sensors may for example monitor the volume of fuel (e.g. petrol (i.e. gas), diesel, hydrogen, LPG, or the like) remaining.

At least some of the embodiments are arranged to present a driver with an attainability map (which may also be thought of as a range map) as shown in Figures 1(a), 1(b) and 1(c). Here the vehicle 100 is shown at position Si on a map to which the underlying map data relates. In the embodiment being described, an envelope is displayed on the map to indicate how far the vehicle 100 can travel. In Figures 1(a) - 1(c) this envelope is shown as a shaded area on the map where segments within the shaded envelope are calculated to be reachable by the vehicle from the position Si. However, the skilled person will appreciate that in other embodiments, other display formats might be used. For example, segments that were reachable might be coloured or otherwise represented in a different manner to the segments that were not attainable.

As described below, at least some of the embodiments being described learn driver behaviour and tailor the range prediction according to that learned behaviour. Such learned behaviour is typically stored as a learned characteristic which is derived from data as a driver drives the vehicle to perform a journey.
Typically, the learned characteristic is stored within the memory of the processing apparatus and associated with an identified driver, which may be thought of as a given driver. As such, the vehicle 100 may have a plurality of driver profiles which specific drivers can select when using the vehicle 100. It will be appreciated that the learned characteristics are generated by the behaviour of a given driver and then used, in some embodiments, to generate a range prediction of the vehicle 100. As such, if a learned characteristic from a first driver is used to generate a range prediction for a second driver then it may well be that the range prediction / attainability prediction is inaccurate. The learned characteristics may be derived from data including any one or more of the following: data providing the acceleration of the vehicle; data providing the velocity (i.e. speed) of the vehicle; data recording the selection of segments along which a driver chooses to drive the vehicle; the temperature; auxiliary systems (such as heating on/off, AC on/off, radio on/off, headlights on/off, or the like), road infrastructure including map information for individual segments (such as topology, elevation, number of lanes, existence of traffic lights, speed limit, or the like). Thus, it will be seen that the data from which the learned characteristics are generated may be thought of as giving a driver preference as to how the vehicle is driven.

Here the selection of segments along which a driver chooses to drive the vehicle may, in at least some embodiments, include an indication of a driver's preferred route selection. For example, it may indicate that a driver avoids freeways/motorways, prefers freeways/motorways, prefers roads with more bends, and the like.

In embodiments in which probabilistic range maps are generated, energy consumption (voltage and current drawn) of the engine and the auxiliary systems are recorded. This enables computation of the actual energy demand for a particular segment.

Driver behaviour and/or preference can have a significant effect on the range of the vehicle 100. In the examples of Figures 1(a), 1(b) and 1(c) the only difference between the situation of the three maps is that the drivers in each of the examples exhibit different preferences. As such, the shaded envelope 102a, 102b, 102c is quite different in each of the examples.

In addition, it will be appreciated that the energy expended along a route can vary significantly according to the route chosen. Indeed, it is believed that the energy usage to move between two points (e.g. s_s and s_g) can vary by up to substantially 10% according to the route chosen. In the remainder of this description the following nomenclature is used: the destination (s_g) is attainable if the expected energy E_θ(s_s, s_g) required to travel there from the current location (s_s), for driver-specific driving preferences θ, is less than the current state of charge of the battery, E_soc.

Thus, embodiments aim to estimate the expected energy, E_θ(s_s, s_g), to every destination s_g within the map. In the embodiment being described, it is assumed that travel is restricted to the available road network, i.e. that a driver will restrict driving to the segments of the map and not drive off-road. Thus, the map topology gives rise to a set of all available trajectories T_{s,g} = (ζ_1, ..., ζ_n), where each ζ_i denotes a specific trajectory from s_s to s_g. These trajectories can be substantially different and each one has associated with it a particular energy cost E(ζ_i).

It is convenient to express a driver's preferences over the set of trajectories by conditioning the likelihood of a particular trajectory being traversed, p_θ(ζ_i), on the set of driver-specific parameters θ. Such a formulation computes, for every possible destination s_goal in the map, the expected energy consumed in getting there from the current location s_s as

E_θ(s_s, s_g) = Σ_{ζ_i ∈ T_{s,g}} p_θ(ζ_i) E(ζ_i)    (1)

While intuitive, this approach to range map computation suffers from two drawbacks. Firstly, the set T_{s,g} can consist of exponentially many trajectories, which renders a direct computation of E_θ(s_s, s_g) difficult in a real-time context. Secondly, this problem is compounded by the need to compute E_θ(s_s, s_g) for every possible destination s_g in the map.

Thus, the embodiment being described presents a technique to mitigate these technical problems and uses a model where generalised route preferences of the driver are encoded as a particular policy in a Markov Decision Process (MDP). The embodiment being described arranges this policy to optimise an a priori unknown, driver-specific reward structure, which can be learned and, over time, refined from trajectory data, typically using Inverse Reinforcement Learning (IRL). Computation of Equ. 1 for a single destination s_g is framed as an evaluation of this policy in a related MDP, which has its reward structure replaced with one expressing the energy demand of every action based on a canonical model of energy consumption.

Thus, in more detail, a Markov Decision Process (MDP), (S, A, P, R), is defined as follows: the set of states composed of all segments in a map, S = {s_1, s_2, ..., s_N}; the set of available actions composed of all possible turns at the end of a segment, A = {a_1, a_2, ..., a_M}; a deterministic transition model P; and a reward structure R with one reward associated with each state-action pair.

In this case the rewards express the energy cost of choosing a particular action in a particular state, E(s_i, a_j). Further, the embodiment assumes that the transition model is deterministic such that a particular state-action pair leads to a particular next state with certainty, p(s_j | s_i, a_k) = 1. Any trajectory in the road network can then be expressed as a sequence of state-action pairs ζ = { {s_0, a_0}, {s_1, a_1}, ..., {s_{n-1}, a_{n-1}}, {s_n, a_n} }, where the final state, s_n, is an absorbing state where no further reward is accrued independent of actions taken. In this model every trajectory has an associated total energy cost given by the sum of the individual costs:

E(ζ) = Σ_{{s_i, a_j} ∈ ζ} E(s_i, a_j)
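The road-network MDP just defined can be sketched as a small data structure; the segment names, turns and energy costs below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RoadMDP:
    states: list       # segment ids s_1 ... s_N
    actions: dict      # state -> list of available turns at the segment end
    next_state: dict   # (state, action) -> next state (deterministic P)
    energy: dict       # (state, action) -> energy cost E(s_i, a_j)

    def trajectory_energy(self, trajectory):
        """Total energy of a trajectory: the sum of per-step costs (Equ. 2)."""
        return sum(self.energy[(s, a)] for s, a in trajectory)

# Tiny example network: s1 -a1-> s2 -a2-> s3, with s3 absorbing.
mdp = RoadMDP(
    states=["s1", "s2", "s3"],
    actions={"s1": ["a1"], "s2": ["a2"], "s3": []},
    next_state={("s1", "a1"): "s2", ("s2", "a2"): "s3"},
    energy={("s1", "a1"): 1.5, ("s2", "a2"): 2.0},
)
print(mdp.trajectory_energy([("s1", "a1"), ("s2", "a2")]))  # → 3.5
```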

The embodiment assumes driver route preferences when driving to s_g to result in a stochastic policy π (i.e. a learned characteristic) defining a probability distribution over possible actions, p_π(a_j | s_i), at the end of every segment. The probability of the driver taking a particular trajectory can be expressed as the probability of observing the corresponding state-action sequence, such that:

p_θ(ζ_i) = Π_{{s, a} ∈ ζ_i} p_π(a | s)

Thus, at least some embodiments are arranged to use the map data to generate a model of the choices of a driver as a plurality of state-action pairs, with at least one pair being associated with a node of a network representing the navigable segments selectable by a driver, and wherein the probability of a driver taking a particular trajectory can be expressed as the probability of observing the corresponding state-action sequence through the network.

Computing E_θ(s_s, s_g) as per Equation 1 is now equivalent to evaluating the value of policy π at state s_s. Several efficient methods for policy evaluation exist and embodiments may use any one of these methods. However, the embodiment being described employs a method where V_π(s_s) can be found as the solution of a system of linear equations, as described in Sutton, R and Barto, A (1998, Reinforcement learning: An introduction, volume 1, Cambridge University Press). This teaching is hereby incorporated by reference and the skilled person is directed to read this teaching:

V_π(s_i) = Σ_j p_π(a_j | s_i) [E(s_i, a_j) + V_π(s_j')]

where s_j' denotes the (deterministic) successor of the state-action pair {s_i, a_j}.
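Policy evaluation by solving the linear system can be sketched as follows. The three-state network, policy probabilities and energy costs are invented; the value V(s) plays the role of the expected energy E_θ(s, s_g) of reaching the absorbing goal from s:

```python
import numpy as np

states = ["s1", "s2", "goal"]
idx = {s: i for i, s in enumerate(states)}
# policy: state -> list of (probability, next_state, energy_cost); goal absorbs.
policy = {
    "s1": [(0.7, "s2", 1.0), (0.3, "goal", 4.0)],
    "s2": [(1.0, "goal", 2.0)],
    "goal": [],
}

n = len(states)
P = np.zeros((n, n))   # P_pi: policy-weighted transition matrix
c = np.zeros(n)        # c_pi: expected one-step energy under the policy
for s, choices in policy.items():
    for prob, nxt, cost in choices:
        P[idx[s], idx[nxt]] += prob
        c[idx[s]] += prob * cost
# No further energy accrues at the absorbing state, so its row stays zero.

# Solve (I - P_pi) V = c_pi for the expected energy from every state at once.
V = np.linalg.solve(np.eye(n) - P, c)
print(round(V[idx["s1"]], 3))  # → 3.3
```

Note that one solve yields V for every starting state simultaneously, which is exactly the property the text exploits.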

Moreover, embodiments employing this method have an advantage in that they produce the value V_π(s_i), and hence E_θ(s_i, s_g), for all starting states s_i at once. However, computation of a range map requires the opposite (E_θ(s_s, s_i) for all destinations s_i): an expected energy to reach every destination. It is hereinafter described how the embodiment being described computes E_θ(s_s, s_i) at no extra processing cost.

However, before it is described how to compute E_θ(s_s, s_i), it is described how the driver-specific policy π is learnt.

The policy to be evaluated implicitly induces the driver-specific probability distribution over possible trajectories, p_θ(ζ_i), considered in Equ. 1. This becomes apparent when contrasting, for example, the trajectories taken by drivers who minimise travel time or distance with the more complex preferences often exhibited in reality. This can be seen with reference to Figure 2, in which Figure 2(a) shows a route that might be chosen by a driver choosing minimum travel time, which typically therefore tends to favour motorways and other major routes; Figure 2(b) shows a route chosen by a driver choosing minimum distance travelled; and Figure 2(c) shows a route chosen according to a driver's more idiosyncratic preferences. It will be seen that each of Figures 2(a), 2(b) and 2(c) shows a route between the same start s_s and end s_g points.

For an individual driver, both p_θ(ζ_i) and p_π(a_j | s_i) can be derived given a set of trajectories traversed. This work is described further in Ziebart, B. D.; Maas, A. L.; Bagnell, J. A.; Dey, A. K.; 2008, "Maximum entropy inverse reinforcement learning", In Proceedings of the AAAI Conference on Artificial Intelligence, P1433-1438. This reference is hereby incorporated by reference and the skilled person is directed to read this paper. As in the examples above, the embodiment being described implicitly assumes a driver to optimise an a priori unknown cost function which leads to a particular trajectory to a given destination. In particular, the embodiment being described employs a feature-based IRL formulation and expresses this cost as a driver-specific reward structure in an MDP defined over the road network. More specifically, this MDP is identical to the one described above apart from the reward structure, which is unknown and needs to be recovered. The reward for a given state-action pair, R_θ(s_i, a_j), is assumed to be a weighted linear combination of features, f_{s_i,a_j}, such that

R_θ(s_i, a_j) = θ^T f_{s_i,a_j}    (6)

where θ denotes the weight vector, which may be driver-specific and/or learned. The features express various properties encountered when transitioning from one segment to another, such as the segment length, the time required to traverse it, the road class (e.g. motorway, dual carriageway, residential, etc.), the number of lanes and the angle of turn, as well as the number of full stops required due to, for example, the presence of traffic lights or stop signs. The overall reward for a given trajectory, R_θ(ζ_i), is computed as the sum of the rewards of the state-action pairs encountered along it,

R_θ(ζ_i) = Σ_{{s_i, a_j} ∈ ζ_i} R_θ(s_i, a_j)    (7)

For given weights θ, the probability of a driver taking trajectory ζ_i is considered to be proportional to a function exponential in its total reward,

p_θ(ζ_i) ∝ e^{R_θ(ζ_i)}    (8)

This preference model gives rise to an equivalent stochastic policy in the MDP specifying the probability of taking action a_j in state s_i, which is proportional to the sum of probabilities of trajectories starting with the given action, such that

p_π(a_j | s_i) ∝ Σ_{ζ starting with {s_i, a_j}} p_θ(ζ)

Maximum Entropy Inverse Reinforcement Learning can then be used to find the weight vector θ for a given driver based on observed trajectories. This, as well as the computation of the driver-specific policy, can be carried out efficiently using a forward-backward algorithm, such as the one described in Ziebart, B. D.; Maas, A. L.; Bagnell, J. A.; Dey, A. K.; 2008, "Maximum entropy inverse reinforcement learning", In Proceedings of the AAAI Conference on Artificial Intelligence, P1433-1438.
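The feature-based preference model of Equs. 6 to 8 can be illustrated as follows; the feature vectors, weights and route names are invented:

```python
import numpy as np

# Weights over two invented features: [segment length in km, number of full stops].
theta = np.array([-1.0, -0.2])

# Per-step feature vectors f_{s,a} along three candidate routes.
trajectories = {
    "motorway": [np.array([5.0, 0.0]), np.array([3.0, 0.0])],
    "shortcut": [np.array([4.0, 2.0])],
    "scenic":   [np.array([6.0, 1.0]), np.array([4.0, 3.0])],
}

# Equ. 7: trajectory reward is the sum of per-step linear rewards theta^T f.
rewards = {name: sum(theta @ f for f in feats)
           for name, feats in trajectories.items()}
# Equ. 8: trajectory probability is proportional to exp of its total reward.
z = sum(np.exp(r) for r in rewards.values())
probs = {name: float(np.exp(r) / z) for name, r in rewards.items()}
print(max(probs, key=probs.get))  # → shortcut
```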

Embodiments using such a model are advantageous because they do not suffer from the label-bias problem described in Lafferty, McCallum, and Pereira; 2001; "Conditional Random Fields: Probabilistic models for segmenting and labelling sequence data", In Proceedings of the ICML, P336-343. As such, embodiments can be computationally more efficient when evaluating this policy, as discussed below.

As described above, policy evaluation approaches can be used to compute the expected energy requirement E_θ(s_s, s_g) of reaching a destination s_g from the current location s_s. The generation of a range map, however, in principle requires the application of such a policy evaluation step for every possible destination. Thus, the embodiment being described uses a computational shortcut which allows for the simultaneous evaluation of E_θ(s_s, s_i) for every possible goal s_i in a map; as such, embodiments employing this shortcut are more computationally efficient.

In particular, instead of the original MDPs used to model trajectory preferences and energy consumption, the embodiment being described uses related MDPs consisting of the same set of states, actions and rewards but with a transition model P' which encodes the transpose of the original road network (step 702).

Effectively, therefore, the outcome of every action is reversed, such that p'(s_k | s_i, a_j) = p(s_i | s_k, a_j). Note that every trajectory in the original MDPs is also feasible in these new MDPs but that it is traversed in reverse. The effect of the reversal of the MDP is illustrated in Figures 3(a) and 3(b). Figure 3(a) shows a road network as it is processed by the original MDP. It can be seen that the road network consists of 6 states: s_1 through s_6. A path is shown on the network passing from state s_1 via s_2 and s_5 to s_6. Figure 3(b) shows the same road network but with the path therethrough reversed; i.e. the path moves from s_6 via s_5 and s_2 to s_1. It should be noted that, as the reward structure between the two MDPs remains the same, Equ. 8 implies that the probability distribution over trajectories T_{s,g} from s_s to s_g in the original MDPs is the same as the probability distribution over trajectories T'_{g,s} from s_g to s_s in the new MDPs. It follows, therefore, that E_θ(s_s, s_g) in the original road network and E'_θ(s_g, s_s) in the transposed version are equivalent,

E_θ(s_s, s_g) = E'_θ(s_g, s_s)

As a result, E_θ(s_s, s_i) can be computed for every s_i at once by first constructing MDPs based on the transposed road network and then solving for the expected energy consumption E'_θ(s_i, s_s) from every starting state s_i to the destination s_s as described previously.
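The network-transposition step can be sketched as follows for a deterministic transition model; the state and action names loosely follow the Figure 3 example:

```python
# Reverse the outcome of every action so that p'(s_k | s_i, a_j) = p(s_i | s_k, a_j).
# With a deterministic transition model this is simply reversing every edge.
def transpose_network(next_state):
    """next_state maps (state, action) -> successor; return the reversed map."""
    return {(nxt, a): s for (s, a), nxt in next_state.items()}

# Path of Figure 3(a): s1 -> s2 -> s5 -> s6 (action names invented).
forward = {("s1", "a"): "s2", ("s2", "b"): "s5", ("s5", "c"): "s6"}
backward = transpose_network(forward)
print(backward[("s6", "c")])  # → s5 (the path is now traversed in reverse)
```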

The algorithm is summarised below. Computation of E_θ(s_s, s_i) consists of three steps. First, the embodiment being described transposes the road network. Next, the embodiment being described computes a policy to reach state s_s based on a particular driver's route preferences in the transposed route network, using the backward pass described in Ziebart, B. D.; Maas, A. L.; Bagnell, J. A.; Dey, A. K.; 2008, "Maximum entropy inverse reinforcement learning", In Proceedings of the AAAI Conference on Artificial Intelligence, P1433-1438. The value of this policy for every state is then computed by solving a system of linear equations (step 706). The final range map is obtained by thresholding the energy consumption predictions against the current state of charge of the vehicle battery, E_soc.

This method is summarised in the following table, and also within Fig. 7.

Algorithm 1: Efficient Range Map Computation

Transpose the road network:
1. Set p'(s_k | s_i, a_j) = p(s_i | s_k, a_j)

Compute driver policy π to reach state s_s:
2. Set Z_{s_s} = 1
Recursively compute for N iterations:
3. Z_{a_{i,j}} = Σ_k p'(s_k | s_i, a_j) e^{R_θ(s_i, a_j)} Z_{s_k}
4. Z_{s_i} = Σ_j Z_{a_{i,j}}
5. p_π(a_j | s_i) = Z_{a_{i,j}} / Z_{s_i}

Solve the system of linear equations for V_π(s_i) = E'_θ(s_i, s_s):
6. V_π(s_i) = Σ_j p_π(a_j | s_i) [E(s_i, a_j) + Σ_k p'(s_k | s_i, a_j) V_π(s_k)]

Compute the resulting drivability map:
7. D_i = true if E'_θ(s_i, s_s) ≤ E_soc; false otherwise
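The final thresholding step that produces the range map can be sketched as follows; the per-segment expected energies and the state of charge are invented values:

```python
# Once the expected energy E'_theta(s_i, s_s) to the current position is known
# for every segment (invented numbers below, in kWh), a segment is attainable
# when that energy does not exceed the battery's state of charge E_soc.
expected_energy = {"s1": 0.0, "s2": 2.5, "s3": 7.0, "s4": 12.5}
E_soc = 8.0  # kWh remaining in the battery

range_map = {seg: energy <= E_soc for seg, energy in expected_energy.items()}
print(sorted(seg for seg, ok in range_map.items() if ok))  # → ['s1', 's2', 's3']
```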

In one embodiment, the energy consumed over the course of a trajectory, E(ζ), is modelled as the sum of the energy costs of the state-action pairs along the trajectory and is given by Equ. 2. This cost incorporates events specific to the transition, such as potentially stopping at an intersection or changing velocity according to the law of the land, as well as the energy required while traversing segment s_{t+1} itself. To estimate these values, a canonical model of an expected velocity profile is combined with a physical model of the resulting energy demand of the vehicle.

The embodiment being described models a driver's chosen velocity, v, and acceleration, v̇, as a segment is traversed by adapting the Intelligent Driver Model (IDM) described in Treiber, M; Hennecke, A and Helbing, D; 2000; "Congested Traffic States in Empirical Observations and Microscopic Simulations"; Physical Review E 62(2), P1805-1824. This document is hereby incorporated by reference and the skilled person is directed to read this document. The IDM was originally applied in the context of a car-following scenario. The embodiment being described, however, employs the model by assuming the end of the segment to be a car moving at a velocity v_t equivalent to the desired velocity at the end of the segment. Given the road speed limit v_0 and the distance to the end of the segment s, the acceleration of the driver is then given by the ordinary differential equation:

v̇ = a [1 − (v/v_0)^δ − (s*(v, v − v_t)/s)²],  with s*(v, Δv) = v·Δv / (2·√(a·b))

where a is the maximum acceleration and b the comfortable deceleration of Table 1.

Values of speed limits and positions of stop signs and traffic lights can be extracted from sources such as OpenStreetMap or the proprietary mapping sources discussed elsewhere herein; δ denotes a smoothness term. The values used are summarised in Table 1 below.
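The adapted IDM can be sketched numerically as follows; the constants are illustrative stand-ins for Table 1, and the scenario (a 200 m segment ending at a stop sign) is invented:

```python
import math

A_MAX, B_COMF, DELTA = 1.5, 2.0, 4.0  # max accel, comfortable decel, smoothness

def idm_acceleration(v, v0, v_end, gap):
    """IDM acceleration given speed v, limit v0, desired end speed, and the
    distance remaining to the end of the segment."""
    dv = v - v_end
    s_star = max(0.0, v * dv / (2.0 * math.sqrt(A_MAX * B_COMF)))
    return A_MAX * (1.0 - (v / v0) ** DELTA - (s_star / max(gap, 1e-6)) ** 2)

# Integrate along a 200 m segment ending at a stop sign (v_end = 0).
v, x, dt, v0 = 10.0, 0.0, 0.1, 13.9  # m/s, m, s, ~50 km/h limit
while x < 200.0 and v > 0.05:
    a = idm_acceleration(v, v0, 0.0, 200.0 - x)
    v = max(0.0, v + a * dt)
    x += v * dt
print(v < 1.0)  # the car has slowed to near-standstill approaching the sign
```

The profile first accelerates towards the speed limit, then brakes smoothly as the "virtual car" at the segment end is approached, mirroring the behaviour described for Figure 5.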

The energy cost of an action, E(s_i, a_j), is obtained by integrating the vehicle's power demand P during action a_j:

E(s_i, a_j) = ∫ P dt

where the integral is taken over the duration of the action.

The power demand of the vehicle is the sum of the engine power P_eng and the auxiliary power demand P_aux due to vehicle accessories such as lights, air conditioning, etc., such that

P = P_eng + P_aux

The engine power demand, P_eng, is a function of velocity and acceleration. The embodiment being described assumes an efficient system such that

P_eng = F_eng · v

where F_eng is the force produced by the engine required to overcome the forces acting on the vehicle at a given speed. As depicted in Fig. 4, these forces decompose into components due to acceleration (F_acc), friction (F_friction), air resistance (F_air) and gravitation (F_g):

F_eng = F_acc + F_friction + F_air + F_g    (15)

where

F_acc = m·v̇ is the force needed to accelerate the vehicle,

F_friction = c_rr·m·g is the rolling resistance,

F_air = ½·c_d·A·ρ·v² is the aerodynamic resistance force,

F_g = m·g·sin(α) is the hill-climbing force, where α is the angle of inclination of the hill.

Constant   Description                     Value
a          maximum acceleration            — m/s²
b          comfortable deceleration        — m/s²
δ          smoothness coefficient          4
m          vehicle mass                    —
c_rr       rolling friction coefficient    0.015
c_d        aerodynamic drag coefficient    0.25
A          frontal area                    2.846 m²
ρ          air density                     —
P_aux      auxiliary power consumption     —

Table 1: Physical quantities and values used to model energy consumption.

The physical constants required to compute these quantities are inspired by those for a Nissan Leaf and are detailed in Table 1. Fig. 5 shows an example of the resulting acceleration, velocity and power demand profile of a vehicle traversing a typical segment of the road network.

The Figure shows how the velocity, acceleration and power are a function of the position of the vehicle 500 along three different segments: s_t, s_{t+1}, s_{t+2}. In the first segment s_t the car 500 travels at a constant speed (i.e. velocity) of above 30 km/h. It will be seen that after an initial period of acceleration 502 the velocity of the car is constant 504, there is zero acceleration 506 and consequently a zero power output from the engine. The second segment s_{t+1} has a 30 km/h speed limit in place and as such, it can be seen that as the vehicle 500 approaches the end of the first segment s_t it decelerates to 30 km/h; as such, there is a period of negative acceleration 508 before the car traverses the second segment s_{t+1} at a constant velocity 510. At the end of the second segment s_{t+1} there is a stop sign and the car 500 decelerates 512 to a standstill before accelerating into the third segment s_{t+2}, which has a slight incline. After a short period the car 500 reaches a constant velocity but, because of the increased F_g force, must now expend power 514 to move up the hill.
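The force decomposition of Equ. 15 can be sketched as follows; the drag coefficient and frontal area follow Table 1, while the mass, air density and auxiliary draw are assumed illustrative values:

```python
import math

M, G = 1500.0, 9.81          # vehicle mass in kg (assumed), gravity
C_RR, C_D = 0.015, 0.25      # rolling friction and drag coefficients (Table 1)
A_FRONT, RHO = 2.846, 1.2    # frontal area (Table 1), air density (assumed)
P_AUX = 500.0                # auxiliary power draw in W (assumed)

def power_demand(v, v_dot, incline_rad):
    """Total power P = P_eng + P_aux, with P_eng = F_eng * v (Equ. 14)."""
    f_acc = M * v_dot                          # force to accelerate the vehicle
    f_friction = C_RR * M * G                  # rolling resistance
    f_air = 0.5 * C_D * A_FRONT * RHO * v**2   # aerodynamic resistance
    f_g = M * G * math.sin(incline_rad)        # hill-climbing force
    f_eng = f_acc + f_friction + f_air + f_g   # Equ. 15
    return f_eng * v + P_AUX

# Constant ~30 km/h (8.33 m/s) on the flat: friction, drag and auxiliaries only.
print(round(power_demand(8.33, 0.0, 0.0)))
```

Adding an incline makes the hill-climbing term dominate, reproducing the extra power 514 needed in segment s_{t+2}.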

In other embodiments, the energy used by a vehicle for a particular trajectory or route (i.e. a collection of segments), E_ζ, is taken to be a random variable. Typically, the distribution parameters are derived from real data collected from vehicles using the segments, but this need not be the case. E_i is therefore a mixture of the distributions describing the energy consumed along the possible trajectories, E_ζ, with the mixing coefficients provided by the probability of a driver taking each particular trajectory:

p(E_i) = Σ_ζ p_θ(ζ) p(E_ζ)    (16)

As there is potentially a large number of trajectories connecting two destinations, an exact evaluation of Equ. 16 is often infeasible. Therefore, some embodiments re-approximate E_i with a normal distribution, which leads to computational efficiency and may be as specified in Sylvia Frühwirth-Schnatter, "Finite Mixture and Markov Switching Models", 2006.

The moments of the approximating normal distribution are matched to those of the mixture:

μ_i = Σ_ζ p_θ(ζ) μ_ζ

σ_i² = Σ_ζ p_θ(ζ) ((μ_ζ − μ_i)² + σ_ζ²)    (18)
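The moment-matching re-approximation can be sketched as follows; the trajectory probabilities, means and standard deviations are invented:

```python
import math

# (probability p_theta(zeta), mean mu_zeta, std sigma_zeta) for each trajectory.
trajectories = [
    (0.6, 10.0, 1.0),
    (0.4, 14.0, 2.0),
]

# Mixture mean and variance (Equ. 18) of the collapsed normal approximation.
mu_i = sum(p * mu for p, mu, _ in trajectories)
var_i = sum(p * ((mu - mu_i) ** 2 + s ** 2) for p, mu, s in trajectories)
print(round(mu_i, 3), round(math.sqrt(var_i), 3))  # → 11.6 2.458
```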

Thus, the embodiment models the energy used by a vehicle to move along a trajectory as a sum over the traversed segments for that trajectory:

E_ζ = Σ_{{s,a} ∈ ζ} E_{s,a}

Assuming the energy expenditure of all segments is normally distributed and independent of each other implies that E_ζ is also normally distributed, with mean and variance given by:

E_ζ ~ N(μ_ζ, σ_ζ)    (21)

μ_ζ = Σ_{{s,a} ∈ ζ} μ_{s,a}    (22)

σ_ζ² = Σ_{{s,a} ∈ ζ} σ_{s,a}²    (23)

In the embodiment being described, the parameters μ_{s,a}, σ_{s,a} are modelled using a feature-based linear model similar to the one employed to model segment rewards,

μ_{s,a} = θ_μ^T f_{s,a}    (24)

σ_{s,a} = θ_σ^T f_{s,a}    (25)

where f_{s,a} again denotes the feature vector associated with every state-action pair, and θ_μ and θ_σ denote driver-specific weights parameterising the energy consumption. The weights are learned by maximising the likelihood of the observed energy consumption over the lifetime of the system by constrained optimisation, restricted such that σ_{s,a} ≥ 0.

Figure 6 shows the results of experiments that were conducted to test the performance of the embodiment being described, which uses the energy model of Figure 5; those experiments are now described in more detail. In particular, as range map generation crucially depends on the accuracy of the underlying estimates of energy consumption, this discussion focuses on an evaluation of the relative error incurred in E_θ(s_start, s_goal). Microsoft's GeoLife dataset is described in at least the following three documents: 1) Zheng, Y.; Li, Q.; Chen, Y.; Xie, X.; and Ma, W. 2008, "Understanding mobility based on GPS data", In Proceedings of the 10th International Conference on Ubiquitous Computing, 312-321; 2) Zheng, Y.; Zhang, L.; Xie, X.; and Ma, W. 2009, "Mining interesting locations and travel sequences from GPS trajectories", In Proceedings of the 18th International Conference on World Wide Web, 791-800; 3) Zheng, Y.; Xie, X.; and Ma, W. 2010, "Geolife: A collaborative social networking service among user, location and trajectory", IEEE Data Engineering Bulletin 33(2):32-39.

This GeoLife dataset contains GPS traces collected from a number of different users. Specifically, the experiments considered 50 different users, each having on average 100 trajectories spanning the urban area of Beijing, China. The road network and infrastructure information for this region, covering about 100 km², was obtained from the OpenStreetMap project. This road network is dense and contains a multitude of possible routes between any two places. The resulting MDP contains 80,000 states and 130,000 actions. As the dataset contains GPS traces only, these were segmented into individual trajectories based on time and position information.

Next, the Hidden Markov Model-based method described in Newson, P., and Krumm, J. 2009, "Hidden Markov map matching through noise and sparseness", In Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (GIS), 336-343, was used to match the GPS trajectories to the traversed segments. Following this preprocessing step, for every user considered, 30 trajectories were sampled randomly from the set of trajectories available; the remainder were used for testing. Across all users this resulted in total training and test set sizes of 1500 and 3500 trajectories, respectively.

For each driver individually, a subset of the training set trajectories is used to learn the preference weights, θ. The prediction accuracy of the learned model was then evaluated using all trajectories in the test set. In particular, for every {start, goal} pair observed in the test set, a prediction is made of the expected energy expense incurred by that driver for that trip, E_θ(s_start, s_goal). The expected energy requirements E_θ(s_s, s_g) for each route are therefore weighted based on driver preferences. This value is then compared against the energy requirement computed for the actual trajectory taken. As the evaluation metric, the relative prediction error, ε(ζ_i | θ), for trajectory ζ_i given the learned weight vector θ, as compared to ground truth, is employed,

ε(ζ_i | θ) = |E_θ(s_start, s_goal) − E(ζ_i)| / E(ζ_i)

Figure 6, at line 600, shows the relative prediction error for 50 users as the amount of training data is increased. This is akin to more trajectory information becoming available over time. The Figure indicates that the prediction error decreases rapidly from that obtained using the initial model as the number of training trajectories is increased. As information is added to the model beyond 16 training trajectories, improvements are more marginal until, at 28 training trajectories, the overall prediction error reaches ca. 6.5%. In the embodiment being described, the formulation E_θ(s_start, s_goal) is computed as an expectation over the entire set of possible trajectories from start to goal state. The benefit of this approach over considering, for example, only the most likely trajectory given a driver's preferences is also shown in the Figure. The trajectory from start to goal state corresponding to the maximum-likelihood estimate (MLE) is found by computing the shortest path through the road network using the learned segment cost, and this is shown at 602 in the Figure. While the trend of the MLE solution is similar to that found when computing the full expectation, in that the prediction error decreases as information is added to the system, the results suggest an overall performance decrease of ca. 1% compared to using the entire distribution. In contrast, the approach of the embodiment described above computes an expectation over the entire distribution over trajectories.

The Figure also shows two alternative models for path cost: travel time 604 and distance 606. The Figure indicates that these commonly used estimates result in considerably higher overall prediction error confirming that drivers exhibit more complex preferences as to which route is taken.

Figures 9A and 9B show examples of confidence level maps 900, 920 in which information has been added to the map which highlights the probability (i.e. a level of confidence) with which a driver can reach a destination. The skilled person will appreciate that embodiments are making a prediction about the range of a vehicle and as such, it might be advantageous to provide the level of confidence in that prediction. The position of the vehicle is marked by an arrow 910. The marked regions on the map indicate the attainable range with 90% (902), 50% (904) and 10% (906) probability (i.e. a 90% confidence, a 50% confidence and a 10% confidence, where the lower the number the less likely it is that the destination can be reached). The region of the map outside of these marked regions (902, 904, 906) is therefore attainable with a probability of less than 10%. In other embodiments, confidences other than 90%, 50% and 10% may be shown.

Figure 9B demonstrates that there may be more than one distinct region corresponding to a certain probability of attainability. Region 904 is split into 904a and 904b. The presence of a region such as 904a may indicate the presence of a hill or other aspect of road infrastructure which increases the energy requirement for reaching a particular location or set of locations. In such cases, a particular region 904a may have a lower probability of attainability than some locations at a greater distance from the vehicle position 910. This principle also applies to the example range maps shown in Figure 1, and to maps generated in at least some other embodiments.

The generation of probabilistic range maps, as illustrated in Figure 9, requires additional calculations and inputs. Such embodiments use noise terms, also known as uncertainty or error terms, in the estimation of the remaining energy in the battery and of the energy requirement for reaching a destination, to provide a confidence level. Such embodiments may therefore use the energy model, described above with reference to equations 16 to 25, in which the energy usage of the vehicle is modelled as a distribution. Given the current location of the vehicle, s_start, and the current state of charge of the vehicle battery, which is equivalent to the amount of energy remaining in the battery, E_soc, the probability that a particular destination can be reached is calculated for every destination in the map. In particular, a probability p_i that the energy E_i required to travel to destination s_i is at most the available energy E_soc is estimated,

p_i = p(E_i ≤ E_soc)

and a destination is considered attainable if this probability is at least a threshold p_t:

D_i = 1 if p_i ≥ p_t; 0 otherwise    (28)

Equivalently, the value of p_i can be expressed in terms of a transformed random variable Ê_i = E_soc − E_i,

p_i = p(Ê_i ≥ 0)    (29)

which leads to an intuitive formulation in terms of the cumulative distribution function of Ê_i:

p_i = 1 − F_{Ê_i}(0)    (30)

The embodiment being described assumes that the original quantities involved are normally distributed with mean μ and variance σ² (i.e. E_i ~ N(μ_i, σ_i) and E_soc ~ N(μ_soc, σ_soc)); it follows that Ê_i is also normally distributed such that

Ê_i ~ N(μ̂_i, σ̂_i)    (31)

μ̂_i = μ_soc − μ_i    (32)

σ̂_i² = σ_soc² + σ_i²    (33)

Figures 9A and 9B demonstrate the attainability of destinations for different levels of p_t. Equ. 30 can then be expressed in terms of the cumulative distribution function Φ of the standard normal distribution such that

p_i = Φ(μ̂_i / σ̂_i)

Note that this formulation caters for the state of charge estimate being a random variable with associated measurement noise, which can be considerable. Typical values for μ_soc and σ_soc taken from the prior art are used.

In alternative embodiments, these parameters are also learnt as part of the model.
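The attainability probability just described can be sketched as follows; the energy figures are invented:

```python
import math

def attainability(mu_i, sigma_i, mu_soc, sigma_soc):
    """Probability that destination i is reachable: the standard normal CDF
    of mu_hat / sigma_hat, with mu_hat and sigma_hat as in Equs. 32-33."""
    mu_hat = mu_soc - mu_i
    sigma_hat = math.sqrt(sigma_soc**2 + sigma_i**2)
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(mu_hat / (sigma_hat * math.sqrt(2.0))))

# Destination needs ~10 kWh; ~12 kWh available, both with some uncertainty.
p = attainability(mu_i=10.0, sigma_i=1.0, mu_soc=12.0, sigma_soc=1.0)
print(round(p, 3))
```

Thresholding this probability at p_t (e.g. 0.9, 0.5, 0.1) yields the nested confidence regions 902, 904 and 906 of Figure 9.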

Embodiments compute the parameters μ_i and σ_i to allow for the generation of probabilistic attainability maps as shown in Figure 9. Consider an arbitrary function G_{s,a} factorising over the state-action (s, a) space. In essence, the generation of an attainability map computes an expectation of the sum of G_{s,a} over the distribution of all possible trajectories ζ from a starting position to a particular destination,

Σ_ζ p_θ(ζ) Σ_{{s,a} ∈ ζ} G_{s,a}

The polynomial-time Algorithm 1, denoted by Q(·), given a reward structure R, computes this expectation from a starting position s_start to every possible destination in the map, such that

D = Q(s_start, R_{s,a}, G_{s,a})    (36)

where D denotes the set of expectations computed for all destinations. This algorithm can be employed to evaluate the expected mean energy usage, μ_i, by letting

G_{s,a} = μ_{s,a}    (37)

Thus, the embodiment being described extends this approach by also computing the expected variance σ_i². First, Q(·) is executed as above to obtain the mean energy usage for every destination. Consider now a segment s_k, which is reached by performing action a_j at the end of segment s_i:

G̃_{s_i,a_j} = σ_{s_i,a_j}² + (μ_{s_i,a_j} + μ_k − μ_i)²    (38)

Substituting G̃_{s_i,a_j} for G_{s,a} as the input to algorithm Q then allows for the efficient computation of σ_i² as the output of algorithm Q. The entire algorithm is summarised in Algorithm 2.

Algorithm 2: Probabilistic Attainability Map Computation

Input: s_start — position of the car
E_soc — battery charge
θ_π — segment routing preferences
θ_μ, θ_σ — segment energy consumption
Output: D — attainability of destinations

Compute predictive mean:
1. μ = Q(s_start, R_{s,a}, μ_{s,a})

Compute predictive variance:
2. G̃_{s_i,a_j} = σ_{s_i,a_j}² + (μ_{s_i,a_j} + μ_k − μ_i)²
3. σ² = Q(s_start, R_{s,a}, G̃_{s,a})

Compute the resulting drivability map:
4. μ̂_i = μ_soc − μ_i
5. σ̂_i² = σ_soc² + σ_i²
6. p_i = Φ(μ̂_i / σ̂_i)
7. D_i = 1 if p_i ≥ p_t; 0 otherwise