

Title:
A VEHICLE WITH A CRANE WITH OBJECT DETECTING DEVICE
Document Type and Number:
WIPO Patent Application WO/2018/169467
Kind Code:
A1
Abstract:
A vehicle (2) comprising a movable crane (4) mounted on the vehicle and movably attached to the vehicle, the crane (4) comprises at least one crane part (6) and a crane tip (8), the vehicle comprises at least one object detecting device (10) provided at said crane (4) and movable together with said crane, and configured to wirelessly capture information of an object (12), the captured information comprises at least a distance and a direction to said object defining a coordinate in a three dimensional coordinate system. A processing unit (16) is provided and being configured to perform an object scanning procedure, that comprises: to control movement of the object detecting device (10) to a predetermined starting position in relation to the object (12),by controlling movement of the crane (4), to control movement of the object detecting device (10) in relation to said object (12) according to predefined scanning movement rules such that measurements are performed from directions such that essentially the entire outside surface of the object is covered, by controlling movement of the crane, and to simultaneously perform measurements of said object, wherein object information data is captured as a result of said measurements, and to process said object information data to determine a three dimensional (3D) representation of said object.

Inventors:
LYNGBÄCK HANS (SE)
GUSTAFSSON PER (SE)
RÖSTH MARCUS (SE)
Application Number:
PCT/SE2018/050206
Publication Date:
September 20, 2018
Filing Date:
March 05, 2018
Assignee:
CARGOTEC PATENTER AB (SE)
International Classes:
B66C13/46; B66F9/065; B66F9/075; B66F9/24
Foreign References:
JP2014169184A2014-09-18
JPH11173810A1999-07-02
DE102004041938A12006-03-09
JP2014105091A2014-06-09
US20150249821A12015-09-03
US9415976B22016-08-16
US9302890B12016-04-05
US20110187548A12011-08-04
Attorney, Agent or Firm:
BJERKÉNS PATENTBYRÅ KB (Box 5366, Stockholm, SE)
Claims

1. A vehicle (2) comprising a movable crane (4) mounted on the vehicle and movably attached to the vehicle, the crane (4) comprises at least one crane part (6) and a crane tip (8), the vehicle further comprises:

- at least one object detecting device (10) provided at said crane (4) and movable together with said crane, and configured to wirelessly capture information of an object (12), the captured information comprises at least a distance and a direction to said object defining a coordinate in a three dimensional coordinate system, the object detecting device (10) is configured to generate an object data signal (14) comprising data representing captured information,

- a processing unit (16) configured to receive said object data signal (14), characterized in that the processing unit (16) is configured to perform an object scanning procedure, that comprises:

to control movement of the object detecting device (10) to a predetermined starting position in relation to the object (12), by controlling movement of the crane (4),

to control movement of the object detecting device (10) in relation to said object (12) according to predefined scanning movement rules such that measurements are performed from directions such that essentially the entire outside surface of the object is covered, by controlling movement of the crane, and to simultaneously perform measurements of said object, wherein object information data is captured as a result of said measurements, and

to process said object information data to determine a three dimensional (3D) representation of said object.

2. The vehicle (2) according to claim 1, wherein said predefined scanning movement rules include controlling movements such that the object detecting device is moved around said object according to a predefined movement pattern.

3. The vehicle (2) according to any of claims 1-2, wherein said movements of the object detecting device (10) are controlled such that a measurement distance from the object detecting device to said object (12) is less than a predetermined maximal measurement distance.

4. The vehicle (2) according to any of claims 1-3, wherein to process said object information data to determine a three dimensional (3D) representation of said object also includes determining and applying information regarding the position of the object detecting device (10).

5. The vehicle (2) according to any of claims 1-4, wherein said processing unit (16) is further configured to apply said determined 3D representation of said object (12) during a loading procedure of said object, preferably during an automatic loading procedure.

6. The vehicle (2) according to any of claims 1-5, wherein said processing unit (16) is further configured to determine the centre of gravity (COG) of said object (12) based upon the determined 3D representation of said object.

7. The vehicle (2) according to any of claims 1-6, wherein said object detecting device (10) is further configured to detect and capture information about said object from an identification (ID) tag (20) provided at said object, said ID tag being e.g. provided with a bar code or a QR code.

8. The vehicle (2) according to any of claims 1-7, wherein said object detecting device (10) is a camera system comprising two cameras that have essentially overlapping fields of view.

9. The vehicle (2) according to any of claims 1-7, wherein said object detecting device (10) is a laser scanning device.

10. The vehicle (2) according to any of claims 1-7, wherein said object detecting device (10) is a structured-light 3D scanner.

11. The vehicle (2) according to any of claims 1-10, wherein a display unit (22) is provided and the processing unit (16) is configured to generate and apply a presentation signal (24) to said display unit to present said 3D representation on said display unit.

12. The vehicle (2) according to claim 11, wherein said display unit (22) is a pair of glasses structured to present the 3D representation such that the 3D representation is overlaid on the transparent glasses through which a user observes the object.

13. The vehicle (2) according to claim 11, wherein said display unit (22) is a pair of virtual reality goggles.

14. A method in a vehicle (2) comprising a movable crane (4) mounted on the vehicle and movably attached to the vehicle, the crane (4) comprises at least one crane part (6) and a crane tip (8), the vehicle further comprises:

- at least one object detecting device (10) provided at said crane (4) and movable together with said crane, and configured to wirelessly capture information of an object (12), the captured information comprises at least a distance and a direction to said object defining a coordinate in a three dimensional coordinate system, the object detecting device (10) is configured to generate an object data signal (14) comprising data representing captured information,

- a processing unit (16) configured to receive said object data signal (14), characterized in that the method comprises performing an object scanning procedure, that comprises:

controlling movement of the object detecting device to a predetermined starting position in relation to the object, by controlling movement of the crane,

controlling movement of the object detecting device in relation to said object according to predefined scanning movement rules such that measurements are performed from directions such that essentially the entire outside surface of the object is covered, by controlling movement of the crane, and simultaneously performing measurements of said object, and capturing object information data as a result of said measurements, and

processing said object information data to determine a three dimensional (3D) representation of said object.

Description:
A vehicle with a crane with object detecting device

Technical field

The present disclosure relates to a vehicle provided with a crane and in particular a crane provided with an object detecting device to detect and to determine a three dimensional representation of an object.

Background

Working vehicles are often provided with various movable cranes, which are attached to the vehicle via a joint. These cranes comprise movable crane parts, e.g. booms, that may be extended, and that are joined together by joints such that the crane parts may be folded together at the vehicle and extended to reach a load. Various tools, e.g. buckets or forks, may be attached to the crane tip, often via a rotator.

An operator normally has visual control of the crane when performing various tasks. A crane provided with extendible booms has load limitations related to how far the boom has been extended. The operator therefore needs to be aware of load limitations when lifting loads.

Features of a load that may be important when determining load limitations for a crane are for example the physical size and shape of the load, the position and orientation of the load at the ground, and also the weight and centre of gravity of the load.

Today an operator is required to visually inspect the load before e.g. lifting it with a fork. This may sometimes be difficult from a remote location, e.g. when the load is positioned at a location which is not easily accessible, and furthermore, the operator sometimes needs to inspect the load by walking around it.

This may lengthen a loading or unloading procedure. In the prior art there are various examples of using camera systems or other image capturing devices in order to support the user. In the following some prior art documents will be briefly discussed.

US9415976 relates to a crane collision avoidance system. A load locator is provided to determine a location of a load of a crane and provide the location information to a mapping module. A tag scanner scans the site for tags, e.g. RFID tags, defining an obstacle. A mapping module combines the location information, a map and the obstacle information into user-accessible information that is displayed on a GUI. The tags mark objects on the job site which should be avoided during crane operations.

US9302890 relates to a crane control system configured to intervene with crane movements to avoid a collision with an obstacle. A plurality of plans are stored in a memory for use by a control module, each of the plans representing an overhead plan view of a job site including at least one obstacle therein at a predetermined elevation or elevation range. Furthermore, a plurality of crane configurations are stored in memory for use by the control module, and a display interface is configured to interface with the control module to display, via a real-time visualization, a selected one of the plurality of plans, a selected one of the crane configurations, and a real-time position of a crane.

US-2015/0249821 relates to a device for obtaining surrounding information for a vehicle. At an end portion of a telescopic boom of a crane, a stereo camera which measures a distance from the end portion to an object is provided, and an image-processing controller which obtains three-dimensional position information of the object, with the crane as reference, from the stereo camera's measurement data is provided. Three-dimensional position information of objects in the surrounding area centred on the crane is obtained by moving the telescopic boom.

In US-2011/0187548 a lifting device is disclosed to achieve efficient load delivery, load monitoring, collision avoidance, and load hazard avoidance. Various sensors may be provided in a load monitor in a housing close to the load. In particular, collision avoidance and various techniques of generating alarm signals are discussed.

The object of the present invention is to achieve a vehicle, and also a method, that improves the loading and unloading procedures of loads, such that the procedures are faster, safer and more accurate.

Summary

The above-mentioned object is achieved by the present invention according to the independent claims.

Preferred embodiments are set forth in the dependent claims.

According to a first aspect the invention comprises a vehicle comprising a movable crane mounted on the vehicle and movably attached to the vehicle, the crane comprises at least one crane part and a crane tip. The vehicle further comprises at least one object detecting device provided at said crane and movable together with said crane, and configured to wirelessly capture information of an object, the captured information comprises at least a distance and a direction to said object defining a coordinate in a three dimensional coordinate system, the object detecting device is configured to generate an object data signal comprising data representing captured information. The vehicle also comprises a processing unit configured to receive said object data signal. The processing unit is configured to perform an object scanning procedure, that comprises:

to control movement of the object detecting device to a predetermined starting position in relation to the object, by controlling movement of the crane,

to control movement of the object detecting device in relation to said object according to predefined scanning movement rules such that measurements are performed from directions such that essentially the entire outside surface of the object is covered, by controlling movement of the crane, and to simultaneously perform measurements of said object, wherein object information data is captured as a result of said measurements, and

to process said object information data to determine a three dimensional (3D) representation of said object.

According to one embodiment the predefined scanning movement rules include controlling movements such that the object detecting device is moved around said object according to a predefined movement pattern. This is advantageous in order to cover the entire surface of the object.

According to another embodiment the movements of the object detecting device are controlled such that a measurement distance from the object detecting device to the object is less than a predetermined maximal measurement distance. This is beneficial if an automatic scanning procedure is applied.

According to still another embodiment the processing of the object information data to determine a three dimensional (3D) representation of said object also includes determining and applying information regarding the position of the object detecting device. This is preferable as an accurate position of the object in relation to the vehicle, and also to the environment, may then be determined.

According to another embodiment the processing unit is further configured to apply the determined 3D representation of said object (12) during a loading procedure of said object, preferably during an automatic loading procedure. This is advantageous as a loading procedure then easily may be adapted to the present object.

According to a further embodiment the processing unit is further configured to determine the centre of gravity (COG) of the object based upon the determined 3D representation of said object. This feature is advantageous as the loading procedure may also take the COG of the object into account.

According to still a further embodiment the object detecting device is further configured to detect and capture information about the object from an identification (ID) tag provided at said object, said ID tag being e.g. provided with a bar code or a QR code. This is beneficial as it gives extra information about the object which may be applied during a loading procedure.

According to another embodiment the object detecting device is a camera system comprising two cameras that have essentially overlapping fields of view. Using a camera system is advantageous in that it is a passive system, and in that the camera system may also be applied to take images of the object and of the environment.

According to another aspect the present invention relates to a method in a vehicle of the kind described above. The method comprises performing an object scanning procedure, that comprises:

controlling movement of the object detecting device to a predetermined starting position in relation to the object, by controlling movement of the crane,

controlling movement of the object detecting device in relation to said object according to predefined scanning movement rules such that measurements are performed from directions such that essentially the entire outside surface of the object is covered, by controlling movement of the crane, and simultaneously performing measurements of said object, and capturing object information data as a result of said measurements, and

processing said object information data to determine a three dimensional (3D) representation of said object.

Brief description of the drawings

Figure 1 is a schematic illustration of a vehicle according to the present invention.

Figure 2 is a block diagram illustrating various components of the present invention.

Figure 3a-3d show various positions of a crane during information capture in accordance with the present invention.

Figure 4 is a flow diagram showing the method steps according to the present invention.

Detailed description

The vehicle, and method, will now be described in detail with references to the appended figures. Throughout the figures the same, or similar, items have the same reference signs. Moreover, the items and the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. The vehicle is any vehicle provided with a crane, and includes any working vehicle, forestry vehicle, transport vehicle, and loading vehicle.

With references to the schematic figures 1 and 2 it is illustrated that the vehicle 2 comprises a movable crane 4, e.g. a foldable crane, mounted on the vehicle and movably attached to the vehicle. The crane 4 is provided with a tool 9, e.g. a fork or a bucket, attached to a crane tip 8. The crane 4 comprises at least one crane part 6, e.g. at least one boom that may be one or more extendible booms, and is movable within a movement range.

The vehicle and the crane will not be disclosed in greater detail as these are conventional and conventionally used, e.g. with regard to the joint between the crane and the vehicle, the joints between the crane parts of the crane, and the joint between the crane tip and a tool, which normally is a rotator.

The vehicle further comprises at least one object detecting device 10 provided at the crane 4 and movable together with said crane, and configured to wirelessly capture information of an object 12.

The object 12 is a general designation of any fixed or removable three- dimensional item within a working range of the crane. The object may e.g. be a load to be picked up by the crane, or may be a part of the environment and the ground around the vehicle.

In one embodiment the object detecting device 10 is a camera system comprising at least two cameras, and preferably two cameras, that have essentially overlapping fields of view. This embodiment will be further discussed below. The limits of the fields of view of the object detecting device are indicated as dashed lines in figure 2. The captured information comprises at least a distance and a direction to an object defining a coordinate in a three dimensional coordinate system, and the object detecting device 10 is configured to generate an object data signal 14 comprising data representing the captured information.

The vehicle further comprises a processing unit 16 configured to receive the object data signal 14. The processing unit 16 may be embodied as a dedicated electronic control unit (ECU), or implemented as a part of another ECU.

The processing unit 16 is configured to perform an object scanning procedure, which e.g. is a software-implemented procedure that may be stored in the processing unit. The scanning procedure comprises controlling movement of the object detecting device 10 to a predetermined starting position in relation to the object 12, which e.g. could be at one side of the object as illustrated in figure 3a. This is performed by controlling movement of the crane 4 such that the object detecting device reaches its starting position.

Then the movement of the object detecting device 10 in relation to the object 12 is controlled according to predefined scanning movement rules such that measurements are performed from directions such that essentially the entire outside surface of the object is covered, by controlling movement of the crane. More particularly, the object detecting device should be moved such that its field of view is directed towards the object to be measured. If the object detecting device is attached at a tool which is connected to the crane tip via a rotator, the movement may be achieved by applying a rotation to the tool via the rotator. Simultaneously with the movement, measurements of the object are performed. In figures 3a-3d the movement of the crane, and thus of the object detecting device, around the object 12 is illustrated. In the figures 3a-3d the object is drawn with a bold line on the respective side where the measurement is performed, and the field of view of the object detecting device 10 is schematically indicated as a circle sector.

Object information data is captured as a result of the measurements, and the data is applied to the processing unit by being included in the object data signal 14. The processing unit is then configured to process the object information data to determine a three dimensional (3D) representation of the object.

The object information data may e.g. include a point cloud where each point is a three dimensional coordinate of a point on the surface of the object.
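As a minimal illustration of how such a point cloud could be turned into a simple 3D representation, the sketch below computes an axis-aligned bounding box. The function name, the bounding-box choice, and the example coordinates are illustrative assumptions, not taken from the application.

```python
# Illustrative sketch (not from the application): deriving one very simple
# 3D representation -- an axis-aligned bounding box -- from a point cloud
# where each point is an (x, y, z) coordinate on the object's surface.

def bounding_box(points):
    """Return ((min_x, min_y, min_z), (max_x, max_y, max_z)) of a point cloud."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# Four surface points of a hypothetical scanned load, in metres.
cloud = [(0.0, 0.0, 0.0), (1.2, 0.0, 0.0), (1.2, 0.8, 0.0), (1.2, 0.8, 0.5)]
lo, hi = bounding_box(cloud)
```

A real implementation would of course fit a richer surface model (mesh reconstruction) rather than a box; the box merely shows the data flow from coordinates to representation.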

The predefined scanning movement rules include control instructions to control the movements of the crane such that the object detecting device is moved around the object according to a predefined movement pattern. In figures 3a-3d the predefined scanning movement rules include controlling movements such that the object detecting device is moved around the object essentially in one plane, preferably a horizontal plane.

The scanning movement rules should include control instructions that ensure that the necessary measurements are performed in order to establish the 3D representation of the object. The number of measurements that need to be performed often differs depending on the physical shape of the object. If the object is a cube with flat sides, the measurements are probably quite straightforward, in comparison to an object with a more complex outer shape that requires repeated measurements in order to determine the 3D representation. The different movement patterns may include moving around and above the object, and in addition repeating the movement around the object, stopping the movement, moving up and down at one side, etc. The movements of the object detecting device 10 are preferably controlled such that a measurement distance from the object detecting device to the object 12 is less than a predetermined maximal measurement distance, which is related to the overall size of the object and may as an example be within the range of 0.1-1 metre.
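One such movement pattern, the horizontal circle around the object, can be sketched as a list of waypoints for the object detecting device. Everything here (function name, parameters, the choice of eight viewing positions) is a hypothetical illustration, not part of the application:

```python
import math

# Hypothetical sketch of one "predefined movement pattern": waypoints on a
# horizontal circle around the object's centre, at a standoff distance chosen
# below the predetermined maximal measurement distance.

def circular_scan_waypoints(centre, standoff, height, n_views):
    """(x, y, z) positions evenly spaced on a horizontal circle of radius standoff."""
    cx, cy = centre
    return [(cx + standoff * math.cos(2 * math.pi * k / n_views),
             cy + standoff * math.sin(2 * math.pi * k / n_views),
             height)
            for k in range(n_views)]

# Object centred 2 m in front of the vehicle, scanned from 8 directions
# at a 0.5 m standoff (within the 0.1-1 m range mentioned above).
waypoints = circular_scan_waypoints(centre=(2.0, 0.0), standoff=0.5,
                                    height=1.0, n_views=8)
```

The crane controller would then move the crane tip through these positions while keeping the sensor's field of view aimed at the centre.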

The scanning procedure steps may be automatically performed until enough data has been captured in order to determine the 3D representation of the object.

During the processing of the object information data to determine a 3D representation of the object, it may also be possible to include and apply information regarding the position of the object detecting device 10. The position of the object detecting device 10 may be determined by using information regarding the position of the crane, which is available e.g. via length sensors and/or angle sensors of the crane, the geometry of the crane, and the mounting position of the object detecting device on the crane.

The processing unit 16 is further configured to apply the determined 3D representation of the object 12 during a loading procedure of the object, preferably during an automatic loading procedure.
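Computing the sensor position from crane sensor readings is essentially forward kinematics. The sketch below uses a deliberately simplified model with one slewing angle, one boom elevation angle and one boom length; a real crane has several joints and link offsets, so the model and all names are illustrative only:

```python
import math

# Simplified planar sketch: position of a device mounted at the crane tip,
# computed from crane sensor readings (slewing angle, boom elevation angle,
# total boom extension). Illustrative two-angle model, not a real crane.

def sensor_position(slew_deg, elevation_deg, boom_length, base_height=1.0):
    """Cartesian (x, y, z) of the crane-tip-mounted sensor, in metres."""
    slew = math.radians(slew_deg)
    elev = math.radians(elevation_deg)
    reach = boom_length * math.cos(elev)          # horizontal projection
    return (reach * math.cos(slew),
            reach * math.sin(slew),
            base_height + boom_length * math.sin(elev))
```

With the sensor position known in vehicle coordinates, each measured distance-and-direction sample can be placed into the common three dimensional coordinate system.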

The determined 3D representation of the object 12 is stored in a storage unit in the processing unit. Over time numerous different 3D representations of objects are stored. These may be grouped in various typical object classes, e.g. dependent upon shape, size and other features. This stored information may be applied during the step of determining the 3D representation such that the processing unit may recognize the scanned object among the stored 3D objects. Thereby, the scanning procedure may be performed faster.
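The application does not specify how a scanned object is recognized among stored representations; one straightforward (hypothetical) possibility is to compare coarse shape features such as sorted bounding-box dimensions, as sketched here:

```python
# Hypothetical sketch of recognizing a scanned object among stored 3D
# representations by comparing sorted bounding-box dimensions (w, d, h).
# The matching criterion and tolerance are illustrative assumptions.

def match_stored_object(dims, stored, tol=0.05):
    """dims: sorted (w, d, h) in metres; stored: name -> sorted (w, d, h).
    Returns the closest stored name within tol, or None."""
    best, best_err = None, tol
    for name, ref in stored.items():
        err = max(abs(a - b) for a, b in zip(dims, ref))
        if err < best_err:
            best, best_err = name, err
    return best

stored = {"euro_pallet": (0.144, 0.8, 1.2), "barrel": (0.6, 0.6, 0.9)}
```

A match would let the scanning procedure stop early and reuse the stored representation.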

Furthermore, the scanned objects may also be shared with other users, e.g. with other vehicles or via a central database accessible to other vehicles.

In a further embodiment the processing unit 16 is configured to determine the centre of gravity (COG) of the object 12 based upon the determined 3D representation of the object. This may be useful information when determining a loading procedure.

To further improve the information available about the object, an identification (ID) tag 20 is preferably provided at the object. The ID tag may be a visual tag, e.g. provided with a bar code or a QR code. The object detecting device 10 is then configured to detect and capture information about the object from the ID tag.
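The COG determination mentioned above can be sketched under a uniform-density assumption: with a dense, volume-filling point sample of the object, the COG is simply the centroid. The application leaves the actual algorithm unspecified, so this is one plausible approach, not the claimed method:

```python
# Hedged sketch: COG under a uniform-density assumption, as the centroid of
# a volume-filling point sample derived from the 3D representation.

def centre_of_gravity(points):
    """Average of the (x, y, z) sample points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))
```

A non-uniform load would require weighting each sample by its estimated local density, which the geometry alone cannot provide.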

As mentioned above, the object detecting device 10 is preferably a camera system provided with two cameras arranged at a distance from each other such that the cameras capture overlapping fields of view.

In other embodiments the object detecting device 10 is a laser scanning device or a structured-light 3D scanner.

The above types of object detecting devices will now be discussed in detail, together with some other techniques that also may be used.

A camera system comprises at least two cameras, preferably two cameras, and is sometimes called a stereo camera. This is an advantageous embodiment of the object detecting device as stereo camera systems are more and more frequently used in various vehicles.

A stereo camera is a type of camera with two lenses with a separate image sensor for each lens. This allows the camera to simulate human binocular vision, and therefore gives it the ability to capture three-dimensional images, a process known as stereo photography. Stereo cameras may be used for making 3D pictures, or for range imaging. Unlike most other approaches to depth sensing, such as structured light or time-of-flight measurements, stereo vision is a purely passive technology which also works in bright daylight.
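The depth recovery behind stereo photography follows the textbook triangulation relation, sketched below. This is generic stereo geometry, not a formula given in the application, and the example numbers are invented:

```python
# Textbook stereo-triangulation relation: with focal length f (pixels),
# camera baseline B (metres) and disparity d (pixels) between the two
# images of a matched point, its depth is Z = f * B / d.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# e.g. f = 700 px, baseline 0.12 m, disparity 42 px -> depth about 2 m
z = depth_from_disparity(700.0, 0.12, 42.0)
```

Note that depth resolution degrades with distance, since a one-pixel disparity change corresponds to a larger depth change far from the cameras; this is one reason a bounded measurement distance helps.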

In another embodiment the object detecting device uses lidar technology. Lidar is sometimes considered an acronym of Light Detection And Ranging (sometimes Light Imaging, Detection, And Ranging), and is a surveying method that measures distance to a target by illuminating that target with laser light. Lidar is popularly used to make high-resolution maps, with applications in geodesy, forestry, laser guidance, airborne laser swath mapping (ALSM), and laser altimetry. Lidar is sometimes called laser scanning or 3D scanning, with terrestrial, airborne, and mobile applications.

In still another embodiment a 3D scanning device is used. A 3D scanner is a device that analyses a real-world object or environment to collect data on its shape and possibly its appearance (e.g. colour). The collected data can then be used to construct digital three-dimensional models.

Many different technologies can be used to build these 3D scanning devices; each technology comes with its own limitations, advantages and costs. Many limitations in the kind of objects that can be digitised are still present; for example, optical technologies encounter many difficulties with shiny, mirroring or transparent objects. For example, industrial computed tomography scanning can be used to construct digital 3D models, applying non-destructive testing.

The purpose of a 3D scanner is usually to create a point cloud of geometric samples on the surface of the subject. These points can then be used to extrapolate the shape of the subject (a process called reconstruction). If colour information is collected at each point, then the colours on the surface of the subject can also be determined.

3D scanners share several traits with cameras. Like most cameras, they have a cone-like field of view, and like cameras, they can only collect information about surfaces that are not obscured. While a camera collects colour information about surfaces within its field of view, a 3D scanner collects distance information about surfaces within its field of view. The "picture" produced by a 3D scanner describes the distance to a surface at each point in the picture. This allows the three dimensional position of each point in the picture to be identified.
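Turning one pixel of such a "distance picture" into a three dimensional point can be sketched with a pinhole-camera model. The parameter names (focal length in pixels, principal point) are conventional but hypothetical here, not taken from the application:

```python
# Sketch: back-projecting one pixel (u, v) of a range image into a 3D point,
# given its measured depth and pinhole intrinsics f_px and principal point
# (cx, cy). Illustrative names; not from the application.

def pixel_to_point(u, v, depth, f_px, cx, cy):
    return ((u - cx) * depth / f_px,
            (v - cy) * depth / f_px,
            depth)
```

Applying this to every pixel of the distance picture yields exactly the point cloud discussed earlier.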

In still another embodiment a so-called time-of-flight lidar scanner may be used to produce a 3D model. The lidar can aim its laser beam in a wide range: its head rotates horizontally, a mirror flips vertically. The laser beam is used to measure the distance to the first object on its path.

The time-of-flight 3D laser scanner is an active scanner that uses laser light to probe the subject. At the heart of this type of scanner is a time-of-flight laser range finder. The laser range finder finds the distance of a surface by timing the round-trip time of a pulse of light. A laser is used to emit a pulse of light, and the amount of time before the reflected light is seen by a detector is measured. Since the speed of light c is known, the round-trip time determines the travel distance of the light, which is twice the distance between the scanner and the surface. The accuracy of a time-of-flight 3D laser scanner depends on how precisely the round-trip time t can be measured; approximately 3.3 picoseconds is the time taken for light to travel 1 millimetre. The laser range finder only detects the distance of one point in its direction of view. Thus, the scanner scans its entire field of view one point at a time by changing the range finder's direction of view to scan different points. The view direction of the laser range finder can be changed either by rotating the range finder itself, or by using a system of rotating mirrors. The latter method is commonly used because mirrors are much lighter and can thus be rotated much faster and with greater accuracy. Typical time-of-flight 3D laser scanners can measure the distance of 10,000-100,000 points every second.
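The round-trip relation described above, and the 3.3 ps figure, can be checked directly:

```python
# distance = c * t / 2, where t is the measured round-trip time of the pulse.

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_seconds):
    """One-way scanner-to-surface distance from a round-trip time."""
    return C * round_trip_seconds / 2.0

# Sanity check of the text's figure: light covers 1 mm in ~3.34 picoseconds,
# so a 1 mm distance error corresponds to ~6.7 ps of round-trip timing error.
time_per_mm = 0.001 / C
```

This is why picosecond-class timing electronics are needed for millimetre accuracy.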

In another embodiment the object detecting device uses a structured-light 3D scanner that projects a pattern of light on the subject and looks at the deformation of the pattern on the subject. The pattern is projected onto the subject using either an LCD projector or another stable light source. A camera, offset slightly from the pattern projector, looks at the shape of the pattern and calculates the distance of every point in the field of view.
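The distance calculation in such a projector-camera pair follows generic active-triangulation geometry, sketched below; the coordinate conventions are an illustrative assumption, not from the application:

```python
import math

# Generic active-triangulation sketch for a structured-light setup:
# projector at the origin, camera at baseline b along x, both facing +z.
# A projected ray at angle theta and the camera's viewing angle alpha
# (each measured from its optical axis, toward the other device)
# intersect at depth  z = b / (tan(theta) + tan(alpha)).

def structured_light_depth(baseline_m, theta_deg, alpha_deg):
    return baseline_m / (math.tan(math.radians(theta_deg)) +
                         math.tan(math.radians(alpha_deg)))
```

The projected pattern exists to make the angle theta identifiable for every camera pixel at once, which is what gives structured light its speed advantage over point-by-point scanning.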

The advantage of structured-light 3D scanners is speed and precision. Instead of scanning one point at a time, structured-light scanners scan multiple points or the entire field of view at once. Scanning an entire field of view in a fraction of a second reduces or eliminates the problem of distortion from motion. Some existing systems are capable of scanning moving objects in real-time.

Still another applicable technique is the so-called Photonic Mixer Device (PMD) time-of-flight technology, where invisible infrared light illuminates the scenes and objects to be detected, and the reflected light is sensed by a PMD sensor.

According to one embodiment a display unit 22 is provided and the processing unit 16 is configured to generate and apply a presentation signal 24 to the display unit to present the 3D representation on the display unit.

The display unit may be a display arranged e.g. at a control unit or in the vehicle. As an alternative the display unit 22 is a pair of glasses, for example of the type sold under the trademark Hololens. The pair of glasses is structured to present the 3D representation such that the 3D representation is overlaid on the transparent glasses through which a user observes the object. Various additional information may also be presented as overlaid information, preferably such that the additional information appears close to an illustrated part of the object.

In still another embodiment the display unit 22 is a pair of virtual reality goggles. These types of goggles comprise two displays to be arranged in front of the operator's eyes.

The present invention also comprises a method in a vehicle 2 comprising a movable crane 4 mounted on the vehicle and movably attached to the vehicle. The crane 4 comprises at least one crane part 6 and a crane tip 8.

The vehicle further comprises at least one object detecting device 10 provided at the crane 4 and movable together with the crane. The object detecting device is configured to wirelessly capture information of an object 12. The captured information comprises at least a distance and a direction to the object defining a coordinate in a three dimensional coordinate system. The object detecting device 10 is configured to generate an object data signal 14 comprising data representing the captured information.

The vehicle further comprises a processing unit 16 configured to receive said object data signal 14. Various details of the vehicle are described and discussed above, and reference is here made to that part of the description. The method will now be described with reference to the flow diagram in figure 4. The method comprises performing an object scanning procedure that comprises the steps of:

Controlling movement of the object detecting device to a predetermined starting position in relation to the object, by controlling movement of the crane.

Controlling movement of the object detecting device in relation to the object, by controlling movement of the crane, according to predefined scanning movement rules such that measurements are performed from directions such that essentially the entire outside surface of the object is covered, and simultaneously performing measurements of the object and capturing object information data as a result of said measurements, and by:

Processing the object information data to determine a three dimensional (3D) representation of said object.

The method preferably comprises that the predefined scanning movement rules include controlling movements such that the object detecting device is moved around the object according to a predefined movement pattern.
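The scanning steps above can be expressed as a short, purely illustrative sketch. The crane and detector interfaces and the circular movement pattern are hypothetical stand-ins introduced for this example; they are not taken from the patent.

```python
# Illustrative sketch of the object scanning procedure. The Crane and
# Detector classes and the circular pattern are assumptions made for
# this example, not the patented implementation.
import math

class Crane:
    def __init__(self):
        self.pose = (0.0, 0.0, 0.0)
    def move_to(self, pose):
        self.pose = pose  # in reality: controlled movement of crane parts

class Detector:
    def measure(self, crane, obj_center):
        # returns (horizontal distance, direction) from detector to object
        dx = obj_center[0] - crane.pose[0]
        dy = obj_center[1] - crane.pose[1]
        return math.hypot(dx, dy), math.atan2(dy, dx)

def scan_object(crane, detector, obj_center, radius, steps=8):
    points = []
    for i in range(steps):  # predefined pattern: a circle around the object
        a = 2 * math.pi * i / steps
        crane.move_to((obj_center[0] + radius * math.cos(a),
                       obj_center[1] + radius * math.sin(a), 2.0))
        dist, direction = detector.measure(crane, obj_center)
        points.append((crane.pose, dist, direction))
    return points  # object information data for the 3D reconstruction

cloud = scan_object(Crane(), Detector(), obj_center=(5.0, 5.0), radius=2.0)
print(len(cloud))  # 8 measurements, each 2.0 m from the object centre
```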

The method preferably comprises controlling movements of the object detecting device 10 such that a measurement distance from the object detecting device to the object 12 is less than a predetermined maximal measurement distance.

In a variation the method comprises processing the object information data to determine a three dimensional (3D) representation of the object, and also determining and applying information regarding the position of the object detecting device 10.

In still a further variation the method comprises applying the determined 3D- representation of the object 12 during a loading procedure of the object, preferably during an automatic loading procedure. In another variation the method comprises determining the centre of gravity (COG) of the object 12 based upon the determined 3D-representation of the object. The method may also comprise detecting and capturing information about the object from an identification (ID) tag provided at the object. The ID tag is e.g. provided with a bar code or a QR code.
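One simple way to estimate a centre of gravity from the determined 3D representation, assuming uniform density, is the centroid of the sampled surface points. This is an assumption made for illustration; the patent does not specify the COG computation.

```python
# Hedged sketch: COG estimate as the centroid of a 3D point cloud,
# assuming uniform density. The corner points below are illustrative.
def centre_of_gravity(points):
    """Centroid of a list of (x, y, z) points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

# the eight corners of a 2 x 2 x 2 box
corners = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (0, 0, 2),
           (2, 2, 0), (2, 0, 2), (0, 2, 2), (2, 2, 2)]
print(centre_of_gravity(corners))  # (1.0, 1.0, 1.0)
```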

The method may also comprise presenting the 3D representation on a display unit. The display unit may be a pair of glasses structured to present the 3D representation such that the 3D representation is overlaid on the transparent glasses through which a user observes the object.

As an alternative the display unit is a pair of virtual reality goggles.

The present invention is not limited to the above-described preferred embodiments. Various alternatives, modifications and equivalents may be used. Therefore, the above embodiments should not be taken as limiting the scope of the invention, which is defined by the appended claims.