

Title:
DISTRIBUTED CONTROL OVER A LOSSY AND VARIABLE DELAY COMMUNICATION LINK.
Document Type and Number:
WIPO Patent Application WO/2023/280826
Kind Code:
A1
Abstract:
Disclosed is a method, comprising the steps of: - receiving control information from an operator over a communication channel, in particular a wireless communication channel, for the teleoperated device; - receiving and/or computing quality-related information related to the communication channel; - adapting the control information based on the quality-related information to obtain adapted control information; - controlling the teleoperated device based on the adapted control information.

Inventors:
GANGAKHEDKAR SANDIP (GB)
NOURANI VATANI NAVID (GB)
KAVEH KOOSHA (GB)
Application Number:
PCT/EP2022/068542
Publication Date:
January 12, 2023
Filing Date:
July 05, 2022
Assignee:
IMPERIUM DRIVE (GB)
International Classes:
G08C17/00; G05D1/02; H04W4/40
Foreign References:
US 2019/0227553 A1, 2019-07-25
EP 3772226 A1, 2021-02-03
Attorney, Agent or Firm:
PAUSTIAN & PARTNER PATENTANWÄLTE MBB (DE)
Claims:
Claims

1. A method for distributed control of a controlled entity, comprising the steps of:

- receiving control information from a remote controller over a communication channel, in particular a wireless communication channel, for the controlled entity;

- receiving and/or computing quality-related information related to the communication channel;

- adapting the control information based on the quality-related information to obtain adapted control information;

- controlling the controlled entity based on the adapted control information.

2. The method according to the previous claim, wherein the adaptation of the control information comprises selecting a control mode for the controlled entity from a plurality of control modes.

3. The method according to the previous claim, wherein the plurality of control modes comprises a control mode in which the controlled entity is continuously controlled by a remote controller.

4. The method according to one of the previous two claims, wherein the plurality of control modes comprises a control mode in which the controlled entity is controlled non-continuously by a remote controller and continuously controlled by a local controller.

5. The method according to one of the previous three claims, wherein the plurality of control modes comprises a control mode in which the controlled entity is controlled only by a local control loop.

6. The method according to one of the preceding claims, wherein information about a control mode at an operator side is received and/or information about a control mode at the side of the controlled device is sent to the side of the operator.

7. The method according to one of the previous claims, wherein a selection of a control mode may be done at the remote and/or local side and communicated to the other side.

8. The method according to one of the preceding claims, where the remote controller calculates the possible control space based on the link quality.

9. The method according to one of the previous claims, comprising the step of estimating and/or measuring the quality of the communication link, in particular, at least its latency and available bandwidth, in at least one direction.

10. The method according to one of the previous claims, comprising the step of estimating and/or measuring the data quality information to capture the relative amount and quality of the information that is transmitted.

11. The method according to one of the previous claims, wherein the control information comprises information from a human remote controller and from a machine remote controller, in particular from an artificial intelligence system, and wherein at least one of these pieces of information is adapted based on the quality-related information related to the communication link.

12. The method according to one of the previous claims, wherein the control information is adapted by an artificial intelligence system, in particular on the remote side.

13. The method according to one of the previous claims, wherein information fed back to the remote controller from the controlled entity is amended and/or augmented, in particular based on an artificial intelligence, in particular on the local side.

14. The method according to one of the previous claims comprising the step:

- displaying controllable states of the controlled device by a user interface for controlling a teleoperated device to a human teleoperator.

15. A device for distributed control of a controlled entity, configured for:

- receiving control information from a remote controller over a communication channel, in particular a wireless communication channel, for the controlled entity;

- receiving and/or computing quality-related information related to the communication channel;

- adapting the control information based on the quality-related information to obtain adapted control information;

- controlling the controlled entity based on the adapted control information.

Description:
Distributed control over a lossy and variable delay communication link.

Technical field

The field of this disclosure relates to control of a device over a communication link that can be afflicted with delays and/or noise.

Background

In certain situations, a decision making entity (104) located in a remote environment (103) wishes to control a local entity, a so-called controlled entity (101), located in a different environment (local environment (100)). A local controller (102) and a remote controller (105) in the respective environments interface with the controlled entity and the decision making entity respectively and communicate over a lossy and/or variable delay communication link (106).

Information about the local environment can be transmitted to the remote decision making entity over the same communication link. The decision making entity can receive a delayed and/or incomplete version of this information and makes a control decision that is communicated to the remote controller; based on this, a control command is communicated to the local controller and is finally applied to the controlled entity.

In some situations the local controller does not have the capability or functionality to make all control decisions necessary to control the controlled entity.

The decision making entity can have the capability and functionality to make all control decisions; however, in the presence of a lossy communication channel these decisions can be based on incomplete and/or delayed information about the local environment and can arrive at the local environment with an additional delay. Further, they may also be incomplete and hence may not be suitable or safe to apply directly. Therefore, improvements in these areas are desirable.

Description

An object of the embodiments disclosed in the following is to improve the control of a remote decision making entity.

This problem is solved by the disclosed embodiments, which are in particular defined by the subject matter of the independent claims. The dependent claims provide further embodiments. In the following, different aspects and embodiments of these aspects are disclosed, which provide additional features and advantages.

In the remainder, further aspects and embodiments of these aspects are disclosed.

A first aspect relates to a method, comprising the steps of:

- receiving control information from a remote decision making entity over a communication channel, in particular a wireless communication channel, for the controlled entity;

- receiving and/or computing quality-related information related to the communication channel;

- adapting the control information based on the quality-related information to obtain adapted control information;

- controlling the controlled entity based on the adapted control information.

An embodiment of the first aspect relates to a method, wherein the adaptation of the control information comprises selecting a control mode for the controlled entity from a plurality of control modes.

The control modes differ in how much the control of the controlled entity is affected by a local controller, which is in the same environment as the controlled entity, in particular a part of the teleoperated device, and how much the controlled entity is affected by a remote controller, which is separated from it by a transmission line, in particular a wireless communication channel, e.g. a 5G channel or a Wi-Fi channel (wired transmission lines are also possible).

An embodiment of the first aspect relates to a method, wherein the plurality of control modes comprises a control mode in which the controlled entity is continuously controlled by a remote decision making module (500).

The direct control mode can in particular almost be identical to the remote decision making module being in-situ with the controlled entity and controlling it. If the controlled entity is a vehicle, the local controller may for example expect accelerator pedal position, brake pedal position and steering angle positions from the remote controller. If the remote decision making module is a human, then for the human operator to be able to control the vehicle in this mode, the total delay (from capturing the image at the vehicle until receiving the control command from the remote operator back in the vehicle) can in particular be below 200ms and the total bandwidth of the communication link between the vehicle and the remote operator can in particular be greater than 2 Mbits/s.

An embodiment of the first aspect relates to a method, wherein the plurality of control modes comprises a control mode in which the controlled entity is controlled non-continuously by a remote operator and continuously controlled by a local controller (501).

If the controlled entity is a vehicle, this control mode can for example be a waypoint control mode. Instead of sending accelerator pedal position, brake pedal position and steering angle positions, the remote controller sends a waypoint, i.e. a location ahead of the vehicle with an associated velocity and heading. The local controller can calculate the appropriate acceleration, brake and steering values to command the vehicle to the desired location to reach it with the desired velocity and heading. This mode of operation is in particular possible with delays above 200ms and communication link bandwidths greater than 0.5 Mbits/s.
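Purely as an illustration, the following sketch shows one way a local controller could turn such a waypoint (position, velocity, heading) into accelerator, brake and steering commands; the proportional control laws, gains and names are assumptions for illustration and are not taken from the disclosure.

```python
import math
from dataclasses import dataclass


@dataclass
class Waypoint:
    x: float        # metres ahead of the vehicle (vehicle frame)
    y: float        # metres to the left of the vehicle (vehicle frame)
    speed: float    # desired speed at the waypoint, m/s
    heading: float  # desired heading at the waypoint, rad (vehicle frame)


def waypoint_to_commands(wp: Waypoint, current_speed: float,
                         k_speed: float = 0.5, k_steer: float = 1.0):
    """Small proportional sketch: steer towards the waypoint and adjust speed
    towards the commanded value.  Returns (accel, brake, steer)."""
    # Bearing from the vehicle to the waypoint, in the vehicle frame.
    bearing = math.atan2(wp.y, wp.x)
    # Blend bearing and the desired final heading; the weighting is illustrative only.
    steer = k_steer * (0.7 * bearing + 0.3 * wp.heading)
    # Proportional speed control split into accelerator / brake channels.
    speed_error = wp.speed - current_speed
    accel = max(0.0, k_speed * speed_error)
    brake = max(0.0, -k_speed * speed_error)
    return accel, brake, steer


if __name__ == "__main__":
    # Waypoint 20 m ahead, 2 m to the left, to be reached at 8 m/s.
    print(waypoint_to_commands(Waypoint(20.0, 2.0, 8.0, 0.1), current_speed=10.0))
```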

An embodiment of the first aspect relates to a method, wherein the plurality of control modes comprises a control mode in which the controlled entity is controlled only by a local control loop (502).

This control mode can be an emergency control mode. It can be intended for situations where the link connectivity is poor or absent (disconnected) and it is no longer possible to use the commands from the remote environment to control the controlled entity; i.e. >1000ms delay or a link bandwidth less than 0.5 Mbits/s. The local controller therefore carries out its own control of the controlled entity without taking any information from the remote environment into account. This control mode could, e.g., turn on the emergency indicators and apply a brake command to bring the vehicle to a complete standstill.

An embodiment of the first aspect relates to a method, wherein information about a control mode (113) at an operator side is received and/or information about a control mode at the side of the teleoperated device is sent to the side of the operator.

In particular, after a change of a control mode the other side must be informed about the control mode in operation. Therefore the controller from the side of the teleoperated device (112) can send information about a control mode to the side of the operator (105) and/or vice versa.

A second aspect relates to the remote controller, wherein the controllable states of the controlled entity are based on information about the communication channel, in particular on a communication delay (703).

The presence of large communication delays reduces the set of controllable states of the teleoperated device if it is controlled by the operator (706). An embodiment of the second aspect relates to a user interface, wherein the user interface is configured to display controllable states of the teleoperated device.

That means that states that the teleoperated device cannot reach by a control input of the operator are either not displayed at all or displayed in a way that allows the operator to distinguish them from the controllable states when using the interface (702). For example, positions the teleoperated entity cannot reach by an operator-generated control will not be shown on the screen. Advantageously, the operator is then prevented from commanding impossible states that could even damage the teleoperated device.

In general, the remote controller can also be based on artificial intelligence (AI), for example an AI agent “in the cloud”. An AI agent can, e.g., be any machine learning system or a neural network.

In some embodiments, the AI can command the local controller on its own, in particular based on the feedback that was provided from the local situation. In other embodiments, the AI works together with a human remote controller.

In one embodiment the human remote controller defines and/or starts an action in certain aspects and thereafter the AI completes the task for the remote control side. For example, the human operator starts a digging operation (e.g. by saying “dig here”) and an AI agent completes the remote control of the digging operation. The defining and/or starting of an action can be communicated by the human teleoperator to the AI teleoperator. Additionally or alternatively, the defining and/or starting of an action can be communicated by the human operator to the local environment and/or the controlled entity, and the AI teleoperator takes over the completion of the action. By completion is meant the teleoperation of the action up to a defined end point. In another embodiment, the human operator defines a task and the AI supervises the completion/operation of the task. In an exemplary embodiment, the human operator defines and/or starts an action to keep a vehicle on a certain trajectory, e.g. by saying “stay in the lane”; after this, an AI agent can supervise whether the action is performed correctly by the controlled entity. In case the action is not, or is estimated not to be, fulfilled properly, the AI agent can warn the human teleoperator and/or request further commands from the human operator.

In one embodiment, an AI agent can take over the teleoperated control of a local device after having learned from repetitive teleoperated tasks performed by a human teleoperator. For example, a lorry that has repeatedly been driven along the same path by a human teleoperator can at some point be driven by an AI agent. To this end, a human teleoperator can be signaled when the AI has learned the repetitive actions well enough from the previous actions. If this signal is provided, for example by a green light or information on a display, the human teleoperator can switch over to the AI agent.

The AI can be configured to amend the human command and/or add additional information to it. It can potentially communicate more frequently with the controlled entity than the (human) teleoperator. For example, an AI-at-the-edge agent (e.g., on a 5G connection) can communicate with the controlled entity with little latency. This has the advantage of a fast communication link to the controlled device, in particular if sending the data to the human operator would add considerable delay (both from the network and due to human response times). By doing so, the system can advantageously process more data and have access to more data. Based on the additional data, the command can be optimized. In some further embodiments an AI can incorporate information from other vehicles in the vicinity, e.g. traffic information, lidar data, etc., and for example adapt the velocity/direction of the vehicle based on this information. Additionally or alternatively, the AI can select and/or describe and/or amend the information provided to the local controller. Additionally or alternatively, the AI is configured to select and/or describe and/or supplement the information received from the local environment. This can be done in particular before the information is provided to a human or non-human teleoperator, e.g. performing computer vision tasks (detecting objects) or augmenting the information with data from other sources before sending it to the human teleoperator.

In all these embodiments, either the information from the human teleoperator or the information from the AI-based teleoperator can be adapted based on quality-related information. In particular, an AI can adapt the information provided by a human operator based on the quality-related information about the communication link. Advantageously, the AI can reduce the information provided by the human operator before it is sent to the controlled device over the communication link and thereby save bandwidth.

Short Description of the Figures

Further advantages and features result from the following embodiments, which refer to the figures. The figures describe the embodiments in principle and not to scale. The dimensions of the various features may be enlarged or reduced, in particular to facilitate an understanding of the described technology. For this purpose, it is shown, partly schematized, in:

Fig. 1 a principle set up for embodiments of the present disclosure;

Fig. 2 a problematic trajectory of a teleoperated car in presence of a noisy communication channel;

Fig. 3 a set-up of a device according to an embodiment of the present disclosure;

Fig. 4 time-stamping performed by a link prediction function of an embodiment of the present disclosure;

Fig. 5 organization of different control modes with respect to bandwidth and delay according to an embodiment of the present disclosure;

Fig. 6 two cranes with overlapping working spaces that benefit by a control according to the present disclosure;

Fig. 7 different situations in which embodiments according to the present disclosure can be applied;

Fig. 8 a human system interface according to an embodiment of the present disclosure.

In the following descriptions, identical reference signs refer to identical or at least functionally or structurally similar features.

In the following description reference is made to the accompanying figures which form part of the disclosure, and which illustrate specific aspects in which the present disclosure can be understood.

In general, a disclosure of a described method also applies to a corresponding device (or apparatus) for carrying out the method or a corresponding system comprising one or more devices and vice versa. For example, if a specific method step is described, a corresponding device may include a feature to perform the described method step, even if that feature is not explicitly described or represented in the figure. On the other hand, if, for example, a specific device is described based on functional units, a corresponding method may include one or more steps to perform the described functionality, even if such steps are not explicitly described or represented in the figures. Similarly, a system can be provided with corresponding device features or with features to perform a particular method step. The features of the various exemplary aspects and embodiments described above or below may be combined unless expressly stated otherwise.

Description of the Figures

The general problem can be expressed in a setting involving a vehicle on a road, being driven remotely by a remote human operator or an AI system.

The sequence of events is explained in Fig. 2:

- A set of images is taken at time t0 from the local environment (201).

- These are sent to the remote operator at time t1 (202).

- The remote operator is presented with an image at time t2 (203), which corresponds to a delay of t2 - t0 after the image was captured in the vehicle.

- The remote operator needs some time to perceive the image and take a control action and sends the action commands at time t3 (204).

- The action is received back in the vehicle at time t4 (205), which corresponds to a delay of t4 - t0 after the image was taken.

- In this delay of t4 - t0 that has passed between the picture being taken and the command being received, the vehicle has been moving with a constant velocity v0 and constant steering angle θ0 (206).

- If the controller applies the action at time t4, then the vehicle will not necessarily reach the desired state of the remote driver (207). To achieve that, the received command must be adapted to compensate for the delay t4 - t0 (208).

The diagram in Fig. 2 shows the trajectory of the vehicle from t0 to t4 and highlights that, to achieve the new heading (requested by the remote operator) shown with a dashed line, we actually need to apply the control command shown with a dotted line.
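As a rough illustration of such a compensation (not the method defined by the claims), the sketch below forward-predicts the vehicle pose over the delay t4 - t0 with a kinematic bicycle model, assuming constant v0 and θ0 during the delay; the wheelbase value, integration step and function names are illustrative assumptions.

```python
import math


def propagate_pose(x, y, yaw, v0, steer0, delay, wheelbase=2.7, dt=0.01):
    """Roll a simple kinematic bicycle model forward over the link delay
    (t4 - t0) with the constant speed v0 and steering angle steer0 that the
    vehicle held while the command was in flight."""
    t = 0.0
    while t < delay:
        x += v0 * math.cos(yaw) * dt
        y += v0 * math.sin(yaw) * dt
        yaw += (v0 / wheelbase) * math.tan(steer0) * dt
        t += dt
    return x, y, yaw


def compensate_heading_command(desired_heading_at_t0, v0, steer0, delay,
                               wheelbase=2.7):
    """The operator's heading request refers to the state at t0.  Subtract the
    heading the vehicle has already accumulated during the delay so that the
    command applied at t4 still aims at the originally requested heading."""
    _, _, yaw_now = propagate_pose(0.0, 0.0, 0.0, v0, steer0, delay, wheelbase)
    return desired_heading_at_t0 - yaw_now


if __name__ == "__main__":
    # Operator asked for a 0.20 rad heading change based on a 0.4 s old image.
    print(compensate_heading_command(0.20, v0=10.0, steer0=0.05, delay=0.4))
```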

Basic solutions can be basic control methodologies that handle the problem with damping or filtering approaches.

However, these basic solutions exhibit major disadvantages: for example, it is difficult to control a system with varying damping and large jitter, and no adaptation is possible at the remote side. The embodiments disclosed herein exhibit the following advantages. It is very difficult for a human to adapt to varying delays. If a delay is not compensated for, not only is the task more challenging and tiring (due to increased task overload) but the operator can also make mistakes. With the embodiments disclosed herein, the operator does not need to compensate for the delay but is supported automatically.

The embodiments disclosed herein might not need time synchronization, since we calculate the round trip. For the remote system to be able to calculate the delay since the picture was taken, and similarly for the local system to calculate the delay since the command was given, it would be necessary for the local and remote systems to use the same time. To achieve this, a form of time synchronization would be needed, e.g. using NTP to synchronize the two clocks with each other or synchronizing the two clocks with an external source, such as the GNSS time. However, all time synchronization methods require additional processing, and the time differences between the systems can still be in the tens of milliseconds. Another approach to calculate the delay between the picture being taken (t0) and the control command being received (t4) is to pass the local system time (400) along with the data sent (401) and receive this time back together with the control command (403) and the remote processing time (402). In this way, no time synchronization is needed, since the calculated delay (t4 - t0) is based solely on the local time and is close to 100% accurate (the drifting of the local clocks can be ignored since it is relatively very small compared to the delay).
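A minimal sketch of this timestamp-echo bookkeeping is given below, purely as an illustration; the message fields, helper names and the use of time.monotonic() as the local clock are assumptions, not taken from the disclosure.

```python
import time


def make_outgoing_message(payload):
    """Local side: attach the local capture/send timestamp t0 to the data."""
    return {"payload": payload, "t0_local": time.monotonic()}


def make_remote_reply(received_msg, command, processing_delay):
    """Remote side: echo t0 unchanged and report its own processing time d0.
    The remote clock is never used, so no time synchronization is required."""
    return {"command": command,
            "t0_local_echo": received_msg["t0_local"],
            "d0_remote": processing_delay}


def delays_on_reception(reply):
    """Local side at t4: everything is measured against the local clock only."""
    t4 = time.monotonic()
    total_delay = t4 - reply["t0_local_echo"]       # t4 - t0
    transit_delay = total_delay - reply["d0_remote"]  # transmission-only part
    return total_delay, transit_delay


if __name__ == "__main__":
    msg = make_outgoing_message(payload="camera frame")
    time.sleep(0.05)   # stand-in for network transfer plus remote processing
    reply = make_remote_reply(msg, command="steer 0.1", processing_delay=0.02)
    print(delays_on_reception(reply))
```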

The embodiments disclosed herein do not need to use Machine Learning/Deep Learning methods, hence avoiding the issues surrounding interpretability of ML/DL models and systems.

The embodiments disclosed herein can be deployed in an edge computing context, in which one or more computing units in the cloud support or completely control a semi-automated system. This is desirable because the type and amount of computing and data needed in the device can then be reduced. The edge computer in the cloud can have access to other data, and it is not necessary to send these data to the device.

Further applications include Augmented Reality (AR), gaming, factory automation, etc.

The invention consists of the following aspects:

1. A method of selecting suitable control modes (113) for a local (102) and a remote controller (105) separated by a lossy and variable delay communication link (106) based on the condition of said link.

2. A distributed control system consisting of at least one local and one remote controller that implements the aforementioned method. This is depicted in Fig. 3.

Controlled Entity (101): This is the robot, vehicle or system in the local environment (100) that is controlled by the local (102) and remote controllers (105).

Local Controller (102): Interfaces directly with the controlled entity (101) in the Local environment (100) and can make control decisions and execute control actions.

The Local controller may have one or more control modes (113). Depending on the link quality (111) a specific control mode may be selected; e.g. a dedicated control mode in case of the communication link breaking, a different control mode if the delay of the link is very low, another mode if the bandwidth throughput is very low. However, the Local controller may also manage all control actions in a single mode.

Remote Controller (105): Interfaces with the Decision making entity (104) in the Remote environment (103) and translates these decisions into control actions, which are sent to the Local Controller (102) over the communication channel (106). The Remote controller implements at least one control mode: sending control commands to the Local controller based on the inputs from the Decision making entity.

Link Prediction Module (109): Predicts the quality of the communication link based on observation and/or measurements of communication link parameters.

Control Selection Module (112): Selects a control mode (113) for the Local (102) and/or Remote Controllers (105) based on the predicted link quality (111), a Data quality indication (110) and temporal characteristics of the received control commands from the Remote controller. The control selection module can be in either the local or the remote environment, commanding both controllers, or there may be a control selection module in each environment, commanding each controller separately.

Decision making entity (104): Is capable of making decisions based on information (115) received from the local environment (and potentially other information from other sources) and communicates these decisions to the Remote Controller.

Local Data Transceiver (114): Transmits information about the Local environment such as images, environment, location, sensor data etc. (115). Receives feedback about transmitted data, which may include Remote environment information (116).

Remote Data Transceiver (117): Receives information about the Local environment such as images, environment, location, sensor data etc. Transmits feedback about received data and may include Remote environment information.

The Control Method:

The Control Selection module (112) decides which control mode is allocated to the Local Controller (102) and the Remote Controller (105) based on the Link quality prediction and measurements (111) and the Data quality indication (110).

The Link quality information (111) is an estimate and/or a measurement of the quality of the communication link, in particular, at least its latency and available bandwidth, in at least one direction. This information is based on observed communication link parameters like signal quality parameters, signal strength parameters, temporal characteristics of sent/received data traffic and the amount of sent/received data traffic.

The Data quality information (110) captures the relative amount and quality of the Local environment information that is sent to the remote Decision making entity. The quality information is based on a scale with a predefined maximum reference corresponding to the highest possible amount of information that can be transmitted over an ideal communication link with no delay or packet loss.
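One possible, purely illustrative reading of such a scale is sketched below; the chosen inputs, the reference values and the equal weighting are assumptions and not part of the disclosure.

```python
def data_quality_indication(sent_bytes_per_s, ideal_bytes_per_s,
                            frames_delivered, frames_captured):
    """Scale the amount of local-environment information actually delivered
    against a predefined maximum corresponding to an ideal link with no delay
    or packet loss.  Returns a value in [0, 1]."""
    if ideal_bytes_per_s <= 0 or frames_captured <= 0:
        return 0.0
    volume_ratio = min(1.0, sent_bytes_per_s / ideal_bytes_per_s)
    delivery_ratio = min(1.0, frames_delivered / frames_captured)
    # Equal weighting of "how much" and "how completely"; the weights are an
    # illustrative choice only.
    return 0.5 * volume_ratio + 0.5 * delivery_ratio


if __name__ == "__main__":
    print(data_quality_indication(1.2e6, 2.0e6, 28, 30))
```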

The Local controller receives delayed control commands from the Remote controller and may adapt the control commands based on the Link quality and Data quality information. The Link quality information gives the Local controller temporal information about the remote commands, which the Local controller can use to adapt the Remote control commands to compensate for the delay and packet loss. The Data quality information, on the other hand, tells the Local controller what kind and how much data the Remote controller / Remote Decision maker had available to make that control decision. Once again, the Local controller may adapt the received control commands based on the Data quality.

The Local controller may also continue controlling the Controlled entity without receiving commands from the Remote controller; e.g. after the communication link between the two breaks or degrades beyond a certain level. In these scenarios, the Local controller may still use the previously received control commands from the Remote controller as guidance for how to control the Control entity.

Similarly, the Remote controller is aware of the link quality and the delay and will take these into account when making control decisions. As such, both controllers are independently striving to eliminate the effects of the communication link from the control.

Structural features:

1. In case the Decision making entity involves a human, then a Human-Machine Interface (HMI) (800) that enables the human operator (806) to perceive received Local environment information (801) and provide inputs (803) to the Remote controller is available (see Embodiment).

2. A Link Prediction Module (LPM) that predicts the quality of the communication link, in particular the latency, bandwidth and other temporal characteristics of the communication link.

Functional features:

1. In case the Decision making entity involves a human, then the HMI will adapt based on the Link quality to support the human operator visually and cognitively.

2. The Link prediction module measures the delays of control and data packets exchanged between the local and remote environments, by embedding timestamps as part of the message. This enables Round-trip time calculation without relying on explicit time synchronization between the local and remote environment entities.

As shown in Fig. 4, the Link Prediction Module (109) embeds timestamps in the Message M0 (401) sent to a remote environment (103), which responds with a message M1 (403) and includes the processing/calculation delay d0 (402) in the Message M1 along with the original timestamp t0. This enables the Link Prediction Module to compute a cycle time for Messages M0 and M1 which may be considered as an RTT.

The timestamping capability can be extended to include multiple timestamps and/or references to messages that influence the message or messages being sent. This allows the Link Prediction Module to understand the temporal dependencies between sent and received messages.

Further Details:

Human remote operation of a vehicle over a cellular connection. As an embodiment, we look at how a human operator would control a vehicle over a cellular (3G, 4G, or 5G) network and how the invention plays a role here.

We define the items as follows:

a) Controlled Entity: The vehicle, e.g. a car.

b) Local Controller: The controller is inside the vehicle and sends velocity, steering, brake and other commands to move and control the vehicle. The local controller has three modes: 1) direct control, 2) waypoint control, 3) emergency control.

1. The direct control mode is almost identical to the human operator sitting inside the vehicle and controlling it. The controller expects accelerator and brake pedal positions (803), steering angle positions (804), and secondary control commands from the remote operator. For the human operator to be able to control the vehicle in this mode, the total delay (from capturing the image inside the vehicle until receiving the control command from the remote operator in the vehicle) must be below 200ms and the total bandwidth of the communication link between the vehicle and the remote operator should be greater than 2 Mbits/s.

2. In the waypoint control mode, instead of sending accelerator pedal position, brake pedal position and steering angle positions, the remote controller sends a waypoint (701), i.e. a location ahead of the vehicle with an associated velocity and heading. The local controller calculates the appropriate acceleration, brake and steering values as well as the secondary control commands to command the vehicle to the desired location to reach it with the desired velocity and heading. This mode of operation is possible with delays between 200-1000ms and communication link bandwidths greater than 0.5 Mbits/s.

3. The emergency control mode is intended for times where the link connectivity is so poor or absent (disconnected) that it is no longer possible to use the commands from the remote operator to control the vehicle; i.e. >1000ms delay or a link bandwidth less than 0.5 Mbits/s. The controller therefore carries out its own control of the vehicle without taking any information from the remote environment into account. This control mode could, e.g., turn on the emergency indicators and apply a brake command to bring the vehicle to a complete standstill.

c) Delay Metric:

Is a function of the overall end-to-end delay including local processing, transmission time and remote processing (excluding the human remote driver reaction time). Different delay functions may be applied. The delay metric may also include additional temporal parameters such as the control commands' inter-arrival times.

As a concrete example, a simple delay metric definition is used in this embodiment:

Delay metric: d = t_local + t_transmit + t_remote

Where t_local, t_transmit and t_remote are the combined local processing times, transmission times and remote processing times respectively for the forward (vehicle to teleoperator) and reverse (teleoperator to vehicle) links.

d) Bandwidth Metric:

Is a function of the forward (vehicle to teleoperator) and reverse (teleoperator to vehicle) communication link data bandwidth. The Bandwidth metric may also include packet transmission/reception statistics such as packet error rates, number of retransmissions, etc. As a concrete example, the bandwidth metric is defined below:

Bandwidth metric: b = 0.9*r_forward + 0.1*r_reverse

Where r_forward and r_reverse are the link bandwidths for the forward and reverse links respectively.

e) Mode switching:

Mode Switching occurs based on delay and bandwidth metrics as shown in Fig. 5.

f) Link Prediction Module:

An algorithm running inside the vehicle that estimates the quality of the wireless link between the vehicle and the remote operation station.

g) Control Selection Module:

An algorithm running inside the vehicle that selects which control strategy to choose based on the output of the link prediction module.

h) Local Data Transceiver:

An algorithm running inside the vehicle that uses the modem inside the vehicle to send vehicle & sensor data and receive remote operation commands.

i) Decision making entity: The human operator.

j) Remote Controller: The controller inside the remote operator HMI. This controller has three modes similar to the local controller: 1) direct control, 2) waypoint control, and 3) emergency control mode:

1. In the direct control mode, the actual pedal positions and the steering angle of the HMI are captured and sent to the vehicle. This mode is active if the network delay is <200ms.

2. In the waypoint control mode, the acceleration and steering values are translated into a position based on the network delay and projected onto the road ahead of the vehicle and sent to the vehicle. This position is also visualized to the human operator. This mode is active if the network delay is between 200-1000ms.

3. In the emergency control mode, the human operator is made aware via the HMI that he has lost connection. The controller resets the control commands and enters a mode to reestablish reliable communication with the vehicle. Once connection is established, the system will fall back into either of the other active control modes based on the network delay.

k) Remote Data Transceiver:

An algorithm running in the remote operator station receiving data from the vehicle and sending commands to the vehicle.
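To illustrate the waypoint-mode projection of the remote controller described above (item j, mode 2), the following hedged sketch translates the operator's steering and acceleration inputs into a waypoint pushed ahead of the vehicle according to the estimated network delay; the arc geometry, the extra lookahead margin and the parameter names are illustrative assumptions, not taken from the disclosure.

```python
import math


def project_waypoint(steer_angle, accel_cmd, vehicle_speed, network_delay,
                     wheelbase=2.7, extra_horizon=0.5):
    """Translate the operator's steering and acceleration inputs into a
    waypoint ahead of the vehicle.  The waypoint is pushed further out as the
    network delay and the vehicle speed grow."""
    # Time horizon covered by the waypoint: the delay plus a small margin.
    horizon = network_delay + extra_horizon
    # Distance travelled over that horizon (constant-acceleration estimate).
    distance = max(0.0, vehicle_speed * horizon + 0.5 * accel_cmd * horizon ** 2)
    target_speed = max(0.0, vehicle_speed + accel_cmd * horizon)
    # Project the distance onto the arc implied by the current steering input.
    curvature = math.tan(steer_angle) / wheelbase
    if abs(curvature) < 1e-6:
        x, y, heading = distance, 0.0, 0.0      # effectively straight ahead
    else:
        heading = curvature * distance
        x = math.sin(heading) / curvature
        y = (1.0 - math.cos(heading)) / curvature
    return {"x": x, "y": y, "speed": target_speed, "heading": heading}


if __name__ == "__main__":
    # 0.05 rad of steering, gentle acceleration, 10 m/s, 0.4 s estimated delay.
    print(project_waypoint(0.05, 0.5, 10.0, 0.4))
```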

Example: Autonomous crane operation

In this embodiment, at least two cranes operate autonomously to perform pick and place tasks. According to Fig. 6, crane A (600) has a detection region (605) and crane B (602) has a different detection region (606). The cranes have the ability to detect the objects they need to pick up and can calculate the trajectories to the placement location within these detection regions. However, in some circumstances, for example in regions outside their detection region, they may not be able to detect the location of an object and may require support. Furthermore, the cranes are not aware of the position of the other cranes, and it is possible for them to collide with each other (604).

The above introduced entities or functions can be regarded in this context as:

Controlled Entity: Each of the cranes.

Local Controller: The controller is inside each of the cranes and calculates the crane trajectories. The local controller has four modes: 1) normal mode, 2) need of support mode, 3) provide assistance mode, and 4) collision mode:

1. In the normal control mode the crane uses its onboard sensors to detect the objects it needs to pick up, calculates the trajectory to the placement location, detects the placement location and carries out the placement. It operates 100% autonomously.

2. When a crane cannot detect the object it needs to place or cannot detect the location to place the object, it enters into the need of support control mode. In this mode it sends a request to the remote decision making entity and asks for support on how to move to perform its task. The response may be delayed, and the local controller will take the delay into account when applying the received commands.

3. A crane may be asked to enter into a control mode to provide assistance if another crane needs support. In this mode the remote controller commands the crane to position itself in a way so its sensor observations can be used to support the other crane.

4. The collision control mode is entered into when the cranes may collide.

Link Prediction Module: An algorithm running inside each crane that estimates the quality of the wireless link between the crane and the remote controller.

Control Selection Module: An algorithm running inside the crane that selects which control strategy to choose based on the output of the link prediction module.

Local Data Transceiver: An algorithm running inside the crane that uses the modem inside the crane to send vehicle, position, sensor data and receive remote operation commands.

Decision making entity: A computer outside the cranes, responsible to oversee the operation of each crane. The goal of the decision making entity is to ensure all objects are picked and placed correctly.

Remote Controller: The controller inside the remote environment. This controller has three modes: 1) normal control, 2) support control, and 3) collision control mode:

1. In the normal control mode, the remote controller tells each crane what objects to pick up and where to place these.

2. The controller enters the support control mode when a crane requests this. It decides/estimates which other crane(s) could provide the necessary sensor data, commands those cranes to move into assistance mode and gives them a location to go to and a detection task (either detect an object or detect the placement location). That/those cranes move towards the requested position, start their detection, adapt their position to make the best judgement of the position of the object/placement and continuously send this estimated position back to the decision making entity. The decision making entity combines the information from all the cranes and makes a decision on the target location, which is sent back to the crane in need of support.

3. The controller enters the collision control mode if it estimates that two cranes are on a collision course, in which event it decides which crane should stop or change trajectory to avoid collision and sends these commands to those cranes.

Remote Data Transceiver: An algorithm running in the remote environment receiving data from the cranes and sending commands to the cranes.

In a basic version of this implementation, each crane may stop and wait for the other cranes to get into the proper position to tell it where to go. Similarly, a crane may fully stop to avoid collision and let the other crane pass before continuing on its trajectory. However, stopping and starting cranes is time and resource consuming. It is therefore preferred to keep the cranes in constant motion to keep the productivity high and the cost lower. In this case the positions received from other cranes will be inaccurate and will arrive with delay. This forces both the local and the remote controllers to take the delay into account and adapt the received values based on the delay and the motion of the cranes. Other examples and details are presented in the following.
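As a rough illustration of the delay adaptation just described for the cranes, the sketch below extrapolates a delayed position report by the reporting crane's own motion and widens an uncertainty radius with the age of the message; the constant-velocity assumption and the growth rates are illustrative choices, not part of the disclosure.

```python
def extrapolate_crane_position(reported_xy, reported_velocity_xy, message_age,
                               base_uncertainty=0.2, growth_per_s=0.5):
    """Shift a delayed crane position report forward by the crane's own motion
    and widen the uncertainty radius with the age of the message, so that both
    controllers can plan against a conservative estimate."""
    x, y = reported_xy
    vx, vy = reported_velocity_xy
    est_x = x + vx * message_age        # constant-velocity assumption
    est_y = y + vy * message_age
    uncertainty_radius = base_uncertainty + growth_per_s * message_age
    return (est_x, est_y), uncertainty_radius


if __name__ == "__main__":
    # Crane B reported (12.0, 4.0) m moving at (0.4, 0.0) m/s; report is 0.8 s old.
    print(extrapolate_crane_position((12.0, 4.0), (0.4, 0.0), 0.8))
```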

Control mode selection:

As described above, the local controller may operate in different modes, based on the link quality and data quality information. Alternatively, it may be possible to perform the entire range of controls in a single control mode.

In the former case, where different modes are available, a mechanism for selecting the control mode is necessary. The control mode selection module will choose the control mode based on the link quality and data quality information. The decision making may be based on thresholds or calculated on the fly, e.g. using a prediction algorithm.

Consider a local controller in a vehicle or mobile robot that implements two different modes:

Mode M1, Link-aware control: This mode applies corrections to the Remote controller commands as they are received, compensating for the delay and loss introduced by the communication link.

Mode M2, Emergency stop: This mode implements an emergency braking maneuver that applies the maximum possible braking force to bring the vehicle to a stop as soon as possible.

If the measured and/or predicted link quality and the data quality are above a certain threshold, Mode M1 is selected; otherwise Mode M2 is selected.
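The following sketch combines the delay and bandwidth metrics defined in the vehicle embodiment with the example threshold figures given there (direct control below 200 ms and above 2 Mbit/s, waypoint control up to 1000 ms and above 0.5 Mbit/s, emergency control otherwise). Applying those thresholds directly to the weighted bandwidth metric b, and the function names, are illustrative assumptions.

```python
def delay_metric(t_local, t_transmit, t_remote):
    """d = t_local + t_transmit + t_remote (combined forward and reverse), in seconds."""
    return t_local + t_transmit + t_remote


def bandwidth_metric(r_forward, r_reverse):
    """b = 0.9 * r_forward + 0.1 * r_reverse (link bandwidths in Mbit/s)."""
    return 0.9 * r_forward + 0.1 * r_reverse


def select_control_mode(d, b):
    """Threshold-based selection using the example figures from the vehicle
    embodiment: direct control below 200 ms and above 2 Mbit/s, waypoint
    control up to 1000 ms and above 0.5 Mbit/s, emergency control otherwise."""
    if d < 0.2 and b > 2.0:
        return "direct"
    if d < 1.0 and b > 0.5:
        return "waypoint"
    return "emergency"


if __name__ == "__main__":
    d = delay_metric(t_local=0.03, t_transmit=0.25, t_remote=0.05)
    b = bandwidth_metric(r_forward=4.0, r_reverse=1.0)
    print(d, b, select_control_mode(d, b))
```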

Decision Making entity:

1. The Decision Making Entity can be a human operator and an HMI that provides the human operator a sensory feedback about the local environment and input devices that enable him to convey control decisions/actions to the Remote Controller. An embodiment of the Decision Making Entity is shown in Fig. 8.

2. A second embodiment of the decision making entity is an AI system. The AI system will have access to some or all of the local data but may have access to other types of data not available in the local environment, e.g. data from other devices or other servers.

The figures below show some of the specifics of the HMI for a human operator. In Fig. 7 two objects are illustrated:

State-of-the-art: The trajectories (700) show where the vehicle would head, if the current steering and velocity are maintained. It is a visual guide for the human. The curvature of the trajectory changes with the angle of the steering. Similarly, the length of the trajectory changes with the velocity of the vehicle, indicating the stopping distance.

According to the present disclosure: The waypoint (701) is another visual marker for the operator. Similar to the trajectory, the position of the waypoint changes laterally with the change of the steering. Its distance away from the bottom of the screen (i.e. the vehicle) changes with the delay in the link and the speed of the vehicle (703). This change has two effects:

1. It gives a visual indication to the operator about the delay in the network, in particular if the waypoint is pushed away (704), showing that his commands will take effect later.

2. It reduces the error that the human control command will generate for the local controller, by already taking into account the delay and motion of the robot and positioning the waypoint appropriately. In addition to or instead of changing the location of the waypoint, the shape of the waypoint may also change.

The change in the shape of an ellipse is often used to indicate uncertainty. The elongated ellipse in the drawing hence indicates the uncertainty in the delay and its effect on the heading and position estimation.

Fig. 7 further takes the concept of uncertainty and applies it to the trajectory (707). The trajectory is used to show the corridor of the movement of the vehicle. However, if there is any uncertainty in the motion, e.g. due to the delay in the network, the shape of the corridor will change.

If we have an estimate of the network delay, u [sec], it is possible to change the shape of the trajectory to encompass that. The solid trajectory (700) in the figure represents this base trajectory calculated based on the status of the vehicle and the estimated network delay.

However, with every estimate there is also an uncertainty, σ [sec]. Applying the uncertainty to the estimate (u ± 2σ) gives two more trajectories (706), which can be used to show the breadth of the corridor. In the figure, the dashed and the dotted trajectories show the worst-case scenarios, i.e. the minimum and maximum delay. This can help the remote operator (human or not) to make a better decision on what the following control command should be, given the possible trajectories the vehicle could be on.
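A small illustrative sketch of the u ± 2σ corridor is given below; it reduces each trajectory to the end point of a constant-curvature arc driven for the corresponding delay horizon, and the arc model and parameter names are assumptions for illustration only.

```python
import math


def arc_endpoint(speed, curvature, horizon):
    """End point of a constant-curvature arc driven for `horizon` seconds."""
    s = speed * horizon
    if abs(curvature) < 1e-6:
        return s, 0.0
    psi = curvature * s
    return math.sin(psi) / curvature, (1.0 - math.cos(psi)) / curvature


def trajectory_corridor(speed, curvature, delay_mean, delay_sigma):
    """Return the nominal end point (delay u) plus the two bounding end points
    (u - 2*sigma and u + 2*sigma) that span the displayed corridor."""
    horizons = (max(0.0, delay_mean - 2 * delay_sigma),
                delay_mean,
                delay_mean + 2 * delay_sigma)
    return [arc_endpoint(speed, curvature, h) for h in horizons]


if __name__ == "__main__":
    # 10 m/s, gentle left curve, delay estimate 0.4 s with sigma 0.1 s.
    print(trajectory_corridor(10.0, 0.02, 0.4, 0.1))
```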

Reference Signs

100: Local environment
101: Controlled entity
102: Local controller
103: Remote environment
104: Decision making module
105: Remote controller
106: Lossy and variable delay communication link
107: Remote control loop
108: Local control loop
109: Link prediction entity
110: Data quality indication
111: Link quality prediction and measurement
112: Control selection module
113: Controller mode
114: Local data transceiver
115: Local environment information
116: Local environment information feedback
117: Remote data transceiver
201: Image captured in vehicle
202: Image sent from vehicle
203: Image perceived in HMI
204: Action carried out in HMI
205: Action received in vehicle
206: Vehicle trajectory
207: Desired trajectory
208: Applied new trajectory
400: t0
401: M0
402: d0
403: M1
500: Direct control mode
501: Waypoint control mode
502: Emergency control mode
600: Crane A
601: Range of operation of crane A
602: Crane B
603: Range of operation of crane B
604: Collision area
700: Trajectory
701: Waypoint
702: Monitor
703: Displacement depending on the delay and speed
704: More delay
705: Less delay
706: Changed trajectories due to the delay
707: Uncertainty caused by the delay
800: HMI
801: Output devices
802: Screens
803: Input devices
804: Steering wheel
805: Pedals
806: Human remote operator