

Title:
METHOD FOR PROVIDING A UNITARY VISUAL ENVIRONMENT OF A PASSENGER BOARDING BRIDGE
Document Type and Number:
WIPO Patent Application WO/2020/030704
Kind Code:
A1
Abstract:
The invention relates to a method for providing a unitary visual environment of a passenger boarding bridge (10) having at least two components (2, 3, 4) that are moveable relative to one another, for supporting an operator (33) with moving the passenger boarding bridge (10) to a designated position (41). The passenger boarding bridge (10) comprises at least three cameras (14) pointing in different directions for recording images, a number of sensors (11a, 11b, 11c) for determining movements and/or positions, a computer (32) for processing the data received from the cameras (14) and the sensors (11a, 11b, 11c), and display means (34) for displaying the unitary visual environment (20).

Inventors:
PÉREZ PÉREZ MARCOS (ES)
ÁLVAREZ CUERVO ADRIÁN (ES)
MENDIOLAGOITIA JULIANA JOSÉ (ES)
SESMA SANCHEZ FRANCISCO JAVIER (ES)
GONZALEZ MIERES ISABEL (ES)
Application Number:
PCT/EP2019/071239
Publication Date:
February 13, 2020
Filing Date:
August 07, 2019
Assignee:
THYSSENKRUPP ELEV INNOVATION (ES)
THYSSENKRUPP AG (DE)
International Classes:
B64F1/305; G06T3/40
Domestic Patent References:
WO2014207285A1, 2014-12-31
Foreign References:
ES2385883A1, 2012-08-02
US20180047135A1, 2018-02-15
EP1897805A1, 2008-03-12
Attorney, Agent or Firm:
THYSSENKRUPP INTELLECTUAL PROPERTY GMBH (DE)
Claims:

1. Method for providing a unitary visual environment of a passenger boarding bridge (10) having at least two components (2, 3, 4) that are moveable relative to one another for supporting an operator (33) with moving the passenger boarding bridge (10) to a designated position (41), wherein the passenger boarding bridge (10) comprises at least three cameras (14) pointing in different directions for recording images, a number of sensors (11a, 11b, 11c) for determining movements and/or positions, a computer (32) is provided for processing the data received from the cameras (14) and the sensors (11a, 11b, 11c) and display means (34) for displaying the unitary visual environment (20), the method comprises the following steps:

a) determining the position of the cameras (14) of the passenger boarding bridge (10);

b) recording images of the environment of the passenger boarding bridge (10) and acquiring position data with the sensors (11a, 11b, 11c);

c) processing the images of the cameras (14) using the position data provided by the sensors (11a, 11b, 11c) for constructing a unitary visual environment (20) of the passenger boarding bridge (10); and

d) displaying the unitary visual environment (20) to an operator (33) of the passenger boarding bridge (10).

2. Method according to claim 1, characterized in that steps a) to d) are repeated at least with any movement of a camera (14), a component (2, 3, 4) of the passenger boarding bridge (10) and/or with any movement in the immediate environment of the passenger boarding bridge (10).

3. Method according to at least one of the preceding claims, characterized in that in step a) a calibration of the positions of the cameras (14) is performed, in particular in relation to the position of the components (2, 3, 4) of the passenger boarding bridge (10).

4. Method according to at least one of the preceding claims, characterized in that in step a) the positions of the cameras (14) are determined with regard to a fixed point (8) in the environment representing the point of origin of a coordinate system (12) used for processing the method.

5. Method according to claim 4, characterized in that the fixed point (8) is positioned on the rotunda (2) of the passenger boarding bridge (10).

6. Method according to at least one of the preceding claims, characterized in that the unitary visual environment (20) displayed to the operator (33) comprises at least two views of the passenger boarding bridge (10), at least one top view and at least one side view.

7. Method according to at least one of claims 1 to 5, characterized in that the unitary visual environment (20) displayed to the operator (33) is a 3-dimensional unitary visual environment (20) of the passenger boarding bridge (10).

8. Method according to claim 7, characterized in that the perspective of the 3-dimensional environment is the forward view from the cabin (4).

9. Method for remote manoeuvring of a passenger boarding bridge with a remote control unit, characterized in that a unitary visual environment (20) of the passenger boarding bridge (10) is provided to the operator (33) according to at least one of the preceding claims, wherein the operator (33) manoeuvres the passenger boarding bridge (10) to a designated position (41) observing the unitary visual environment (20).

10. Apparatus for providing a unitary visual environment of a passenger boarding bridge (10) to an operator (33), the passenger boarding bridge (10) having at least two components (2, 3, 4) that are moveable relative to one another, characterized in

at least three cameras (14) pointing in different directions for recording images of the environment of the passenger boarding bridge (10),

a number of sensors (11a, 11b, 11c) for determining movements and/or positions of the cameras (14),

a computer (32) for processing the image data received from the cameras (14) and the position data received in particular from the sensors (11a, 11b, 11c) for constructing a unitary visual environment (20) of the passenger boarding bridge (10) by combining the images received from the cameras (14) to at least one 2-dimensional or 3-dimensional representation,

and display means (34) for displaying the unitary visual environment (20).

11. Apparatus according to claim 10, characterized in that the apparatus is suitable for performing the method of at least one of claims 1 to 8 or the method of claim 9.

12. Apparatus according to claim 10 or 11, characterized in that the display means is a 3D-visualization device, in particular smartglasses.

13. Apparatus according to any one of claims 10 to 12, characterized in that the display means shows the unitary visual environment in connection with an augmented reality application.

14. Passenger boarding bridge, comprising an apparatus of at least one of claims 10 to 13.

Description:
Method for Providing a Unitary Visual Environment of a Passenger Boarding Bridge

The present invention refers to a method for providing a unitary visual environment of a passenger boarding bridge having at least two components that are moveable relative to one another for supporting an operator with moving the passenger boarding bridge to a designated position.

The invention is applicable to the observation and remote operation of passenger boarding bridges moving to or from a designated position, for example an aeroplane docking position. Remote operation of passenger boarding bridges is becoming increasingly common. Currently, multiple cameras are installed in the environment of a passenger boarding bridge, sending images to a remote operation room where they are displayed side by side. The operator has to manoeuvre the passenger boarding bridge using the information of the numerous separate images displayed side by side. When manoeuvring the passenger boarding bridge, the operator has to look, for example, at a camera pointing to the bumper and aeroplane to make the manoeuvre and at the same time watch other cameras to prevent collisions of components of the passenger boarding bridge with objects in its environment. Accordingly, the operator has to watch several cameras simultaneously to prevent any potential collision when manoeuvring the passenger boarding bridge.

For providing consistent images of a surrounding area, visual environments are known from parking assistant systems used in cars. Four or more fixed-angle cameras are installed on the car and calibrated at fixed positions relative to each other to reconstruct the car's environment. As the fixed-angle cameras are fixedly positioned on the car, the relative positions between the cameras and of the recorded images are specified and used to reconstruct a unitary visual environment from the images recorded by the cameras for display to the driver. As a passenger boarding bridge comprises at least two components that are moveable relative to one another in order to move the cabin of the passenger boarding bridge both horizontally and vertically, and also to rotate at least the cabin horizontally about its axis, known methods for providing visual environments as used with parking assistant systems of cars are not applicable for providing a unitary visual environment of a passenger boarding bridge.

Therefore, it is an object of this invention to provide an improved method for preparing environment information of a passenger boarding bridge and for displaying the information to an operator in particular to support manoeuvres with a passenger boarding bridge. In a further aspect it is an object of this invention to provide an apparatus for performing the method.

An improved method for providing a visual environment of a passenger boarding bridge and an apparatus for performing the method is achieved by the solution of the independent claims. Further developments of the invention are provided by the subject matter of the dependent claims.

Based on this, a method is proposed for providing a unitary visual environment of a passenger boarding bridge having at least two components that are moveable relative to one another, for supporting an operator with moving the passenger boarding bridge to a designated position. The passenger boarding bridge comprises at least three cameras pointing in different directions for recording images and a number of sensors for determining movements and/or positions. A computer is provided for processing the data received from the cameras and the sensors, and display means are provided for displaying the unitary visual environment.

The method comprises the following steps:

a) determining the position of the cameras;

b) recording images of the environment of the passenger boarding bridge and acquiring position data with the sensors;

c) processing the images of the cameras using the position data provided by the sensors for constructing a unitary visual environment of the passenger boarding bridge; and

d) displaying the unitary visual environment to an operator of the passenger boarding bridge.
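As a sketch only, the four steps above can be expressed as one pass of a processing loop. All function and variable names below are hypothetical placeholders, not part of the claimed method; the image and sensor data are stubbed as labelled values.

```python
# Illustrative sketch of method steps a)-d) as one pass of a processing loop.
# Data structures and helper names are invented for illustration.

def determine_camera_positions(sensor_readings):
    """Step a): derive each camera's position from the reported component poses.
    Each camera is assumed rigidly mounted 1 m ahead (in x) of its component."""
    return {cam: (pose[0] + 1.0, pose[1]) for cam, pose in sensor_readings.items()}

def record_images(cameras):
    """Step b): grab one frame per camera (stubbed as labelled strings)."""
    return {cam: f"frame_from_{cam}" for cam in cameras}

def construct_environment(images, positions):
    """Step c): combine the frames into one view, ordered by camera position."""
    ordered = sorted(images, key=lambda cam: positions[cam])
    return [images[cam] for cam in ordered]

def display(environment):
    """Step d): hand the composed view to the display means."""
    return " | ".join(environment)

# One iteration of steps a) to d) for three hypothetical cameras.
sensor_readings = {
    "tunnel_cam": (0.0, 0.0),
    "cabin_left": (20.0, 3.0),
    "cabin_right": (20.0, -3.0),
}
positions = determine_camera_positions(sensor_readings)   # a)
images = record_images(sensor_readings)                   # b)
environment = construct_environment(images, positions)    # c)
shown = display(environment)                              # d)
print(shown)
```

In the actual method this loop would run continuously, re-executing whenever a camera, a component, or the surroundings move.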

The method is provided for supporting an operator with moving the passenger boarding bridge to a designated position for example at an aeroplane. With the unitary visual environment displayed, the operator sees the situation and possible obstacles in the environment of the passenger boarding bridge at a glance and is enabled to manoeuvre the passenger boarding bridge in a safe way to a designated position.

A passenger boarding bridge for which the method is applicable comprises at least two components that are moveable relative to one another. At one end, passenger boarding bridges usually comprise a rotunda which is normally fastened to a building with direct access from the building to the rotunda. At the rotunda, a tunnel is arranged, usually horizontally and vertically rotatable with regard to the rotunda. The tunnel is usually extendable in the longitudinal direction and may comprise two or more telescopically extendable sections for this purpose. At the end distant from the rotunda, a cabin is rotatably mounted on the tunnel. For manoeuvring the cabin to a designated position, in particular at an aeroplane, several of the components are moved relative to other components of the passenger boarding bridge.

According to the invention, the passenger boarding bridge comprises at least three cameras pointing in different directions for recording images of the environment of the passenger boarding bridge. For example, one of the three cameras is arranged at the bottom side of the tunnel and points to the front end of the tunnel, where the cabin is mounted. With this camera the environment below and to the lower sides of the tunnel can be captured. Two further cameras might be arranged at the cabin, capturing the area in front of the opening of the cabin and the area at one or two sides of the cabin, depending on the coverage angle and the rotation of the cabin relative to the tunnel. The cameras could be standard mono cameras or stereo cameras; in particular, for capturing the widest possible area with one camera, so-called fisheye cameras can be used. As a matter of principle, the more cameras are arranged at the passenger boarding bridge, the more completely and accurately the environment can be captured.
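To illustrate why wide-angle cameras capture more of the environment per camera, a simple pinhole-model calculation of the ground width covered at a given distance can be used. The field-of-view and distance values are arbitrary examples, not figures from the application.

```python
import math

def coverage_width(fov_deg, distance_m):
    """Width of the area covered at a given distance for a camera with the given
    horizontal field of view (idealized pinhole model, camera looking down-range)."""
    return round(2 * distance_m * math.tan(math.radians(fov_deg / 2)), 2)

print(coverage_width(90.0, 10.0))   # a standard wide lens
print(coverage_width(170.0, 10.0))  # a fisheye-class lens covers far more
```

The rapid growth of coverage near 180 degrees is why a single very wide-angle camera can replace several narrow ones, at the cost of distortion that the processing step must correct.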

The passenger boarding bridge further comprises a number of sensors for determining movements and/or positions of cameras and components of the passenger boarding bridge. For example, there is at least one sensor for determining longitudinal or rotational motion of moving components or motion between components moving with regard to one another, or at least one sensor for determining position changes with regard to a spatial fixed point, for example a visual sensor, which may also be integrated in a camera. For determining the movement of the front end of the passenger boarding bridge relative to its back end, for example, at least three sensors are used to determine horizontal, vertical or rotational movements of the components involved. In an embodiment of the method using a smartglass application, an integrated function of the smartglass application can determine movements of the components, so that the function of the sensors is integrated in the smartglasses.

In an embodiment of the invention, the data of at least one sensor provided for controlling the movements resulting from manoeuvres of the passenger boarding bridge is used for determining the position of the cameras in the proposed method. As the cameras are usually arranged at components of the passenger boarding bridge, it is possible to determine changes in the positions of the cameras via changes in the positions of these components. Current passenger boarding bridges usually comprise six sensors: one for determining changes in the vertical direction, one for determining movements in the longitudinal direction (for example, extension of the tunnel), one at the drive system, one for rotational movements of the rotunda, and two for rotational movements of the cabin.

Additional sensors may be used for instance to determine additional movements of components of the passenger boarding bridge (if provided) and / or to determine the positions of the cameras and / or the pointing of the cameras, in particular if moveable cameras are provided.
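Assuming sensor readings of the kind described above (rotunda rotation, tunnel extension, cabin rotation), a minimal planar forward-kinematics sketch shows how a component position, and hence a mounted camera's position, could be derived from them. The model and all numbers are illustrative simplifications that ignore elevation; none of the names come from the application.

```python
import math

def cabin_position(rotunda_angle_deg, tunnel_length_m, cabin_angle_deg):
    """Hypothetical top-view model: the cabin sits at the end of a tunnel of the
    given extended length, which rotates about the rotunda at the origin.
    Returns (x, y, heading) of the cabin in the bridge coordinate system."""
    a = math.radians(rotunda_angle_deg)
    x = tunnel_length_m * math.cos(a)
    y = tunnel_length_m * math.sin(a)
    heading = rotunda_angle_deg + cabin_angle_deg  # cabin yaw is relative to tunnel
    return (round(x, 2), round(y, 2), heading)

# Tunnel rotated 30 degrees and extended to 20 m, cabin counter-rotated 10 degrees:
print(cabin_position(30.0, 20.0, -10.0))
```

A camera rigidly mounted on the cabin would then be placed at a fixed offset from this pose, which is how sensor data can stand in for direct camera-position measurement.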

For performing the method, a computer comprising a suitable application is provided for processing the data received from the cameras and the sensors and for providing the processed data to a display means for displaying the unitary visual environment at least to an operator. The display provided is suitable for displaying the processed data, in particular in the form of a unitary visual environment, to the operator. Various devices can serve as a suitable display, for example at least one screen of appropriate size and type, smartglasses, or head-up displays.

In a first step a) the position of each camera is determined. The determined position data provide a spatial basis for combining the images received from the individual cameras.

In step b) the cameras record images of the environment of the passenger boarding bridge and the sensors acquire position data, in particular of the components of the passenger boarding bridge, the cameras and the view of the cameras, respectively. The data acquired by the sensors serve to specify the area of the environment captured by each camera and form the basis for combining the individual images into a unitary visual environment.

In step c) the images recorded by the cameras are processed by means of the computer using the position data provided by the sensors for constructing a unitary visual environment of the passenger boarding bridge. Using the position data, the computer, or rather the software application running on the computer, combines the image data received from the cameras to create a common unitary visual environment of the passenger boarding bridge. For this purpose, various image processing applications can be used for composing and overlapping a number of moving images recorded by the cameras into a unitary moving image showing at least an area of the environment of the passenger boarding bridge that cannot be recorded with a single camera mounted on the passenger boarding bridge.
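One very reduced illustration of this composition step: per-camera patches are pasted into a common top-view mosaic at offsets derived from the camera positions. A real system would warp the images (for example via homographies) rather than merely translate them; the grid, labels, and offsets here are purely illustrative.

```python
# Sketch of step c): pasting per-camera patches into one top-view mosaic.
def compose_mosaic(patches, width, height):
    """patches: list of (x_offset, y_offset, 2D list of pixel labels).
    Later patches overwrite earlier ones where views overlap."""
    mosaic = [["." for _ in range(width)] for _ in range(height)]
    for ox, oy, patch in patches:
        for r, row in enumerate(patch):
            for c, px in enumerate(row):
                if 0 <= oy + r < height and 0 <= ox + c < width:
                    mosaic[oy + r][ox + c] = px
    return ["".join(row) for row in mosaic]

# Two overlapping 2x2 views: 'T' from the tunnel camera, 'C' from a cabin camera.
tunnel_view = [list("TT"), list("TT")]
cabin_view = [list("CC"), list("CC")]
rows = compose_mosaic([(0, 0, tunnel_view), (1, 1, cabin_view)], 4, 3)
print("\n".join(rows))
```

The overlap region shows why the position data matter: without knowing where each camera's view lands in the common frame, the patches could not be placed consistently.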

In step d) the processed unitary visual environment is displayed in an appropriate manner to assist the operator with watching and manoeuvring the passenger boarding bridge. The type of presentation is adaptable to the needs of the operator, for example by displaying one or two images of appropriate shape and size. In the same manner it is also possible to display a single unitary visual environment of the entire environment of the passenger boarding bridge to the operator, either as a 2- or 3-dimensional representation.

With the proposed method a unitary visual environment of a passenger boarding bridge can be provided to an operator manoeuvring the passenger boarding bridge to a designated position. With the unitary visual environment the operator observes a consistent image in which all areas of interest during manoeuvring the bridge can be seen in consistent relation to each other. In the proposed method, data received from cameras and sensors are processed. However, there are also applications of the method employing 3-dimensional scanners, which combine at least partly the functions of a camera and sensors. Embodiments of the method that use devices combining two or more of the indicated functions also perform the method as proposed.

In an embodiment of the method, steps a) to d) are repeated at least with any movement of a camera, a component of the passenger boarding bridge and/or with any movement in the immediate environment of the passenger boarding bridge. In this way the unitary visual environment displayed to the operator shows an image of the current environment of the passenger boarding bridge at any moment.

In an embodiment of the method, step a) includes a calibration of the positions of the cameras, in particular in relation to the position of the components of the passenger boarding bridge. Such a calibration is performed in particular once when the system is installed, or whenever a change to the system and/or to the position of at least one camera has taken place, to determine the positions of the cameras in relation to the position of the components, in particular if the cameras themselves are moveable with regard to their point of view. Thus it is possible, during the performance of the method, to determine the section of the environment recorded by means of the sensors on the components. A calibration of the positions of the cameras in relation to the position of the components of the passenger boarding bridge may further be used for preparing a spatial basis for combining the images received from the individual cameras.

In an embodiment of the method, the unitary visual environment displayed to the operator comprises at least two views of the passenger boarding bridge, at least one top view and at least one side view. In this embodiment, the images received from the cameras are combined into at least two views. In this way, the unitary visual environment is constructed and displayed in only a few images, so that the operator can observe the areas of interest in the environment of the passenger boarding bridge in only a few images which can easily be watched simultaneously.

In an embodiment of the method, in step a) the positions of the cameras are determined with regard to a fixed point in the environment representing the point of origin of a coordinate system used for processing the method. By use of a coordinate system, it is possible to establish a spatial relation of the positions of the cameras to the fixed point. Additionally, it is also possible to establish a spatial relation of the positions of the components of the passenger boarding bridge to the fixed point. In this way it is possible to generate a virtual 3-dimensional model of the passenger boarding bridge and its environment. Based on a 3-dimensional model, the processing of the images of the cameras is simplified, and processing of a 3-dimensional image as unitary visual environment is also enabled. In particular, a 3-dimensional model makes it possible to generate a 3-dimensional image of the environment of the passenger boarding bridge.
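The spatial relation to the fixed point can be sketched as a plain 2D rigid transform: a point observed in a camera's own frame is rotated by the camera's orientation and translated by its position, both expressed in the coordinate system whose origin is the fixed point on the rotunda. The camera pose and the observed point are invented example values.

```python
import math

def to_global(point_cam, cam_pos, cam_yaw_deg):
    """Transform a point from a camera's frame into the coordinate system whose
    point of origin is the fixed point on the rotunda (2D rotation + translation)."""
    a = math.radians(cam_yaw_deg)
    x, y = point_cam
    gx = cam_pos[0] + x * math.cos(a) - y * math.sin(a)
    gy = cam_pos[1] + x * math.sin(a) + y * math.cos(a)
    return (round(gx, 2), round(gy, 2))

# An obstacle seen 5 m straight ahead of a camera that is mounted 20 m from the
# rotunda origin and yawed 90 degrees in the bridge coordinate system:
print(to_global((5.0, 0.0), (20.0, 0.0), 90.0))
```

Applying the same transform to every camera places all observations in one frame, which is what makes the combined 2D or 3D environment geometrically consistent.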

In an embodiment of the method, the fixed point is positioned on the rotunda of the passenger boarding bridge. As the rotunda is usually fixedly connected to a building or to the ground, there is a spatially fixed position at the rotunda suited as the fixed point of a coordinate system used to establish a spatial relation in the proposed method.

In an embodiment of the method, the unitary visual environment displayed to the operator is a 3-dimensional unitary visual environment of the passenger boarding bridge. In a 3-dimensional visual environment the operator can see at a glance all components of the passenger boarding bridge. In this way the operator is enabled to move the passenger boarding bridge, and the cabin of the passenger boarding bridge respectively, to the designated position with less danger of overlooking any potential risk of collision with obstacles in the environment. An appropriate means for displaying a 3-dimensional unitary visual environment is, for example, a screen at least partly surrounding the operator, or smartglasses showing the unitary visual environment, for instance also in connection with an augmented reality application.

In an embodiment of the method, the perspective of the 3-dimensional environment is the forward view from the cabin. As the designated position into which the cabin and the passenger boarding bridge have to be moved is usually related to the front of the cabin, the forward view is most commonly the view of interest and the view that supports the operator best.

In a further aspect of the invention a method is proposed for remote manoeuvring of a passenger boarding bridge with a remote control unit. A unitary visual environment of the passenger boarding bridge is provided to the operator, who manoeuvres the passenger boarding bridge to a designated position observing the unitary visual environment. With the proposed method, the operator can observe the environment of the passenger boarding bridge at a glance and manoeuvre the passenger boarding bridge in a safe way to an intended position. The unitary visual environment of the passenger boarding bridge is provided in particular in accordance with the method defined in the previous description and may include one or more of the characteristics of the various embodiments of the method as indicated.

In a further aspect of the invention an apparatus is proposed for providing a unitary visual environment of a passenger boarding bridge to an operator. The passenger boarding bridge comprises at least two components that are moveable relative to one another. At least three cameras are provided at the passenger boarding bridge pointing in different directions for recording images of the environment of the passenger boarding bridge, and a number of sensors for determining movements and/or positions of the cameras. The apparatus comprises a computer for processing the image data received from the cameras and the position data received in particular from the sensors for constructing a unitary visual environment of the passenger boarding bridge by combining the images to at least one 2-dimensional or 3-dimensional representation. A display means is provided for displaying the unitary visual environment to an operator.

An appropriate means for displaying a 3-dimensional unitary visual environment is, for example, a screen at least partly surrounding the operator, or smartglasses showing the unitary visual environment. The apparatus is particularly suited for performing a method for providing a unitary visual environment of a passenger boarding bridge in accordance with the previous description. The devices of the apparatus may include one or more of the characteristics of the devices described in connection with the various embodiments of the method. In an embodiment of the apparatus, in particular in connection with an augmented reality application, the display means is a 3D-visualization device. A suitable 3D-visualization device is, for example, a pair of smartglasses with integrated sensors allowing the operator to move virtually in the unitary visual environment and to change its point of view. This embodiment enables the operator to have a more intuitive point of view of regions of increased interest in the environment of the passenger boarding bridge, for example regions with a potential risk of collision.

In an embodiment of the apparatus, the 3D-visualization device shows the unitary visual environment in connection with an augmented reality application. In this embodiment, the operator may receive additional information, for example movements or distances to target positions.

In a further aspect of the invention a passenger boarding bridge is proposed comprising an apparatus as described above for providing a unitary visual environment of the passenger boarding bridge to an operator.

Further advantages, features and possible applications of the present invention will be described in the following in conjunction with the figures.

Shown are in:

Fig. 1: a side view of an exemplary passenger boarding bridge as known from the prior art;

Fig. 2a: a schematic illustration of a top view of a passenger boarding bridge in a stand-by position;

Fig. 2b: a schematic illustration of a top view of the passenger boarding bridge from Fig. 2a during manoeuvring the cabin to an aeroplane; and

Fig. 3: a flow chart of the inventive method.

Fig. 1 shows a passenger boarding bridge 10 comprising a rotunda 2 which is supported on a column 1 and is fixed at an airport building 30 with direct access for passengers from the building 30 to the passenger boarding bridge 10. At the rotunda 2 a tunnel 3 is rotatably fastened which comprises at least two sections moveable relative to one another for extending the length of the tunnel 3 and thus the length of the passenger boarding bridge 10. At the front end of the tunnel 3 a cabin 4 is mounted which is connectable, for example, to an aeroplane 40 and provides access for passengers from the passenger boarding bridge 10 to the aeroplane 40. Roughly at the end of a first section of the tunnel 3 a drive system 6 is arranged which supports the passenger boarding bridge 10 and serves for manoeuvring the cabin 4 and the passenger boarding bridge 10 to a designated position 41 (shown in Fig. 2b). At the drive system 6 an elevation system 5 is arranged which serves for adapting the height of the cabin 4 at the designated position. When adapting the height of the cabin 4, the tunnel 3 rotates around a horizontal axis positioned at the rotunda 2. At the exemplary passenger boarding bridge, service stairs 7 are also mounted.

The exemplary passenger boarding bridge 10 as known from the prior art shown in Fig. 1 also comprises some sensors 11 to determine movements of the components of the passenger boarding bridge 10 for supporting remote manoeuvring of the cabin 4 to a designated position. At the rotunda 2 and the cabin 4, rotation sensors 11a are mounted for sensing rotational movements. At the tunnel 3 at least one length sensor 11b is mounted for sensing an extension of the length of the tunnel 3, and at the elevation system 5 at least one length sensor 11b is mounted for sensing changes in the height of the elevation system 5 and thus changes in the height of the cabin 4. A wheel sensor 11c is mounted at the drive system 6 for sensing drive movements.

At the bottom side of the tunnel a camera 14 is arranged pointing past the elevation system 5 and the drive system 6 towards the front end of the tunnel 3, where the cabin 4 is mounted. This camera 14 records the environment below and to the lower sides of the tunnel 3.

At the position where the rotunda 2 is connected to the building 30, a fixed point 8 is determined which serves as the point of origin of a coordinate system 12 used for processing an exemplary embodiment of the proposed method. When using the coordinate system 12 for performing the method, the position of each camera 14 and/or of the components 2, 3, 4 of the passenger boarding bridge 10 is determined with regard to the fixed point 8 representing the point of origin of the coordinate system 12.

Fig. 2a shows a schematic illustration of a top view of a passenger boarding bridge 10 in a stand-by position. Within the building 30 an operator 33 sits at a workstation in a remote operation room, watching the environment of the passenger boarding bridge 10 on a display 34 and manoeuvring the passenger boarding bridge 10 to a designated position.

At the passenger boarding bridge 10 an apparatus for providing a unitary visual environment 20 of the passenger boarding bridge 10 is installed. The passenger boarding bridge 10 of Fig. 2a comprises a number of components 2, 3, 4 that are moveable relative to one another and has a number of sensors 11a, 11b, 11c (wheel sensor 11c is not shown in Fig. 2a) for determining movements and/or positions of the components 2, 3, 4. Six cameras 14 (one at the bottom side of the tunnel 3) are mounted to the passenger boarding bridge 10 pointing in different directions for recording images of the environment of the passenger boarding bridge 10. A computer 32 is provided in the remote operation room for processing the image data received from the cameras 14 and the position data received in particular from the sensors 11a, 11b, 11c for constructing a unitary visual environment 20 of the passenger boarding bridge 10 by combining the images received from the cameras 14 to at least one 2D- or 3D-representation. A display 34 is arranged at the workstation of the operator 33 that displays the processed unitary visual environment 20 to the operator 33.

Fig. 2b shows a schematic illustration of the top view of Fig. 2a of the passenger boarding bridge 10 at a later stage of the manoeuvre of the cabin 4 to a designated position 41 at an aeroplane 40. As can be seen in Fig. 2b, the sections of the tunnel 3 are moved relative to one another (indicated by arrows), whereby the tunnel 3 extends in the longitudinal direction. Also, the tunnel 3 is rotated relative to the rotunda 2 and the cabin 4 is rotated relative to the tunnel 3. As these components move relative to one another, the cameras 14 mounted at the components 2, 3, 4 also move relative to one another, and thereby the views recorded by the cameras 14 move as well. The data of the sensors 11a, 11b and 11c (wheel sensor 11c is not shown in Fig. 2b) and the images recorded by the cameras 14 are transmitted to the computer 32, which processes a unitary visual environment 20 from the data received from the cameras 14 and the sensors 11a, 11b, 11c that supports the operator with moving the passenger boarding bridge 10 to the designated position 41.

Fig. 3 shows a flow chart of the inventive method. The steps of the inventive method are performed as specified in the description of the invention. As is shown in Fig. 3, in a first step a), the position of each camera 14 and of the moveable components 2, 3, 4 of the passenger boarding bridge 10 is determined. In step b) the cameras 14 record images of the environment of the passenger boarding bridge 10 and the sensors 11a, 11b, 11c acquire position data, in particular of the components 2, 3, 4 of the passenger boarding bridge 10, the cameras 14 and the view of the cameras, respectively. In step c) the images recorded by the cameras 14 are processed by means of a computer 32 using the position data provided by the sensors 11a, 11b, 11c for constructing a unitary visual environment 20 of the passenger boarding bridge 10. In step d) the processed unitary visual environment 20 is displayed to the operator to support watching and manoeuvring the passenger boarding bridge 10.

As indicated by the arrow at the left side of the flow chart, steps a) to d) are repeated at least with any movement of a component 2, 3, 4 of the passenger boarding bridge 10 and/or with any movement in the immediate environment of the passenger boarding bridge 10.

Reference Signs

1 column

2 rotunda

3 tunnel

4 cabin

5 elevation system

6 drive system

7 service stairs

8 fixed point

10 passenger boarding bridge

11a, 11b, 11c sensors

12 coordinate system

14 cameras

20 unitary visual environment

30 building

32 computer

33 operator

34 display

40 aeroplane

41 designated position