

Title:
IMMERSIVE VIDEO WITH HAPTIC FEEDBACK
Document Type and Number:
WIPO Patent Application WO/2017/102026
Kind Code:
A1
Abstract:
The present invention presents a method for generating a stable video and haptic feedback data associated with the stable video from a raw video. The method comprises determining motion information from the raw video. The method further comprises generating the stable video by compensating for at least parts of motion in the raw video based on at least parts of the motion information. The method further comprises generating the haptic feedback data which is synchronized with the stable video based on the at least parts of the motion information.

Inventors:
ANDERSSON LARS (SE)
ZHANG GUOQIANG (SE)
ARAÚJO JOSÉ (SE)
Application Number:
PCT/EP2015/080532
Publication Date:
June 22, 2017
Filing Date:
December 18, 2015
Assignee:
ERICSSON TELEFON AB L M (PUBL) (SE)
International Classes:
G06T5/00; G02B27/01; H04N5/232
Domestic Patent References:
WO 2015/010244 A1 (2015-01-29)
Foreign References:
US 2014/0294305 A1 (2014-10-02)
US 2012/0306725 A1 (2012-12-06)
US 2011/0141219 A1 (2011-06-16)
US 2013/0227410 A1 (2013-08-29)
US 2014/0205260 A1 (2014-07-24)
US 7,558,405 B2 (2009-07-07)
Other References:
RICHARD H. Y. SO: "A Metric to Quantify Virtual Scene Movement for the Study of Cybersickness: Definition, Implementation, and Verification", PRESENCE, Cambridge, MA, US, vol. 10, no. 2, 2001, pages 193-215, ISSN: 1054-7460, DOI: 10.1162/105474601750216803
Attorney, Agent or Firm:
ERICSSON (SE)
Claims:
What is claimed is:

1. A method, performed by a first electronic device (110, 120), for generating a stable video and haptic feedback data associated with the stable video from a raw video, the method comprising:

determining (S420) motion information from the raw video;

generating (S430) the stable video by compensating for at least parts of motion in the raw video based on at least parts of the motion information; and

generating (S440) the haptic feedback data which is synchronized with the stable video based on the at least parts of the motion information.

2. The method according to claim 1, further comprising:

directly or indirectly receiving (S410) sensor data sensed by at least one motion sensor (170), wherein the at least one motion sensor (170) is collocated with a camera (160) by which the raw video is captured;

wherein determining motion information from the raw video is further based on the received sensor data.

3. The method according to claim 1 or 2, further comprising:

synchronously rendering (S450) the stable video and haptic feedback based on the haptic feedback data.

4. The method according to any of claims 1-3, wherein the first electronic device (110) is a receiving electronic device.

5. The method according to any of claims 1-3, wherein the first electronic device (120) is a transmitting electronic device, the method further comprising:

sending (S24) the generated stable video and the haptic feedback data to a receiving electronic device.

6. The method according to any of Claims 2-5, wherein the compensating for at least parts of motion in the raw video comprises at least one of stabilization and smoothing.

7. The method according to Claim 3, wherein the stable video is rendered by a Head Mounted Display (HMD) (140) and the haptic feedback is rendered by a haptic feedback device (150).

8. The method according to Claim 7, wherein the haptic feedback device (150) comprises at least one of a haptic/actuated garment, an electronic bracelet, a head-mounted display, a furniture or device with haptic feedback capabilities.

9. The method according to Claim 8, wherein the haptic/actuated garment comprises at least one of haptic/actuated gloves and shoes, and the furniture or device with haptic feedback capabilities comprises at least one of a chair and a sofa.

10. The method according to any of Claims 7-9, wherein the haptic feedback device (150) provides haptic feedback to one or more body parts comprising at least one of hands, arms, buttocks, head, feet and legs.

11. The method according to any of the preceding claims, wherein the method is performed in a real-time manner.

12. The method according to any of the preceding claims, wherein the generating of the stable video further comprises: reducing complexity of the raw video by filtering or removing at least a partial area of the raw video.

13. An electronic device (110, 120) for generating a stable video and haptic feedback data associated with the stable video from a raw video, the electronic device (110, 120) adapted to:

determine motion information from the raw video;

generate the stable video by compensating for at least parts of motion in the raw video based on at least parts of the motion information; and

generate the haptic feedback data which is synchronized with the stable video based on the at least parts of the motion information.

14. An electronic device (110, 120) for generating a stable video and haptic feedback data associated with the stable video from a raw video, the electronic device (110, 120) comprising:

a processor (506); and

a memory (508) containing instructions (510) which, when executed by the processor (506), cause the processor (506) to:

determine motion information from the raw video;

generate the stable video by compensating for at least parts of motion in the raw video based on at least parts of the motion information; and

generate the haptic feedback data which is synchronized with the stable video based on the at least parts of the motion information.

15. A computer program which, when executed by a processor, causes the processor to perform any of the methods of Claims 2-12.

16. A computer program product comprising a computer program of Claim 15.

17. An apparatus, implemented at a first electronic device (110, 120), for generating a stable video and haptic feedback data associated with the stable video from a raw video, the apparatus comprising:

a module for determining motion information from the raw video;

a module for generating the stable video by compensating for at least parts of motion in the raw video based on at least parts of the motion information; and

a module for generating the haptic feedback data which is synchronized with the stable video based on the at least parts of the motion information.

Description:
IMMERSIVE VIDEO WITH HAPTIC FEEDBACK

TECHNICAL FIELD

The present disclosure generally relates to the technical field of improving a video experience, and in particular, to a method and electronic device for providing a user with immersive video experience together with haptic feedback.

BACKGROUND

Remote control of vehicles, machinery and robotic systems is forecasted to be a key element of the future networked society. These systems allow for skill delivery in remote areas, such as remote surgery and remote operations in mines, but also provide new ways for human collaboration. However, such an application imposes high requirements on the network performance, such as high rates and low latency for real-time sensing and actuation over large distances.

When performing remote control of vehicles, machinery and robotic systems, video may be acquired by a 360-degree camera and displayed to the user via a head-mounted display (HMD). In this way, a more immersive and realistic remote control experience can be provided to the remote operator.

However, when video is presented to a user via an HMD, motion sickness (also known as cybersickness) can occur; it is the number one issue with virtual reality experiences. Motion sickness occurs because the video motion does not match the motion experienced by the user viewing the video via the HMD. A typical way of reducing motion sickness is for the user to experience the video on a high-end simulation apparatus which moves according to the video, but such an apparatus is very costly and large. Further, a remote operator may want to feel the exact motion dynamics of the machinery that he/she is remotely operating. This may be a critical requirement, for example, when operating an excavator in a remote hazardous location, where the motion dynamics of the excavator indicate that a dangerous situation is about to occur. The same applies, for example, when remotely operating a robot inside a building, since motion dynamics related to the robot interacting with the floor, walls and other objects may be very valuable to the remote operator.

SUMMARY

It is in view of the above considerations and others that the various embodiments of the present technology have been made. Specifically, aiming to mitigate at least some of the above drawbacks, the present invention proposes a method and an electronic device for providing a user with an immersive video experience together with haptic feedback.

The problem the present invention aims to solve is to provide the viewer of a video via an HMD with a stable and smooth video which mitigates motion sickness, while at the same time allowing the viewer to experience the motion dynamics present in the video and experienced by the device capturing the video.

According to a first aspect of the present invention, a method, performed by a first electronic device, for generating a stable video and haptic feedback data associated with the stable video from a raw video is provided. The method comprises determining motion information from the raw video. The method further comprises generating the stable video by compensating for at least parts of motion in the raw video based on at least parts of the motion information. The method further comprises generating the haptic feedback data which is synchronized with the stable video based on the at least parts of the motion information.

The method further optionally comprises directly or indirectly receiving sensor data sensed by at least one motion sensor, wherein the at least one motion sensor is collocated with a camera by which the raw video is captured, and wherein determining motion information from the raw video is further based on the received sensor data. The method may further comprise synchronously rendering the stable video and haptic feedback based on the haptic feedback data.

According to a second aspect of the present invention, an electronic device for generating a stable video and haptic feedback data associated with the stable video from a raw video is provided. The electronic device is configured to determine motion information from the raw video. The electronic device is further configured to generate the stable video by compensating for at least parts of motion in the raw video based on at least parts of the motion information. The electronic device is further configured to generate the haptic feedback data which is synchronized with the stable video based on the at least parts of the motion information.

According to a third aspect of the present invention, an electronic device for generating a stable video and haptic feedback data associated with the stable video from a raw video is provided. The electronic device comprises a processor and a memory containing instructions which, when executed by the processor, cause the processor to: determine motion information from the raw video, generate the stable video by compensating for at least parts of motion in the raw video based on at least parts of the motion information, and generate the haptic feedback data which is synchronized with the stable video based on the at least parts of the motion information.

According to a fourth aspect of the present invention, a computer program which, when executed by a processor, causes the processor to perform any of the methods according to the first aspect of the present invention is provided.

According to a fifth aspect of the present invention, a computer program product comprising a computer program according to the fourth aspect of the present invention is provided.

According to a sixth aspect of the present invention, an apparatus, implemented at a first electronic device, for generating a stable video and haptic feedback data associated with the stable video from a raw video is provided. The apparatus may comprise: a module for determining motion information from the raw video; a module for generating the stable video by compensating for at least parts of motion in the raw video based on at least parts of the motion information; and a module for generating the haptic feedback data which is synchronized with the stable video based on the at least parts of the motion information. The apparatus may further comprise: a module for directly or indirectly receiving sensor data sensed by at least one motion sensor, wherein the at least one motion sensor is collocated with a camera by which the raw video is captured, and wherein determining motion information from the raw video is further based on the received sensor data. The apparatus may further comprise: a module for synchronously rendering the stable video and haptic feedback based on the haptic feedback data.

An advantage of the above solutions is that the user may experience a smooth video while still feeling all the rich motion dynamics captured in the original video, at a lower cost than the high-end simulation apparatus mentioned in the Background section.

BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing and other features of the present invention will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure and are, therefore, not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through use of the accompanying drawings.

Fig. 1 is a block diagram illustrating a system for providing video and haptic feedback according to an embodiment of the present invention.

Fig. 2 is a diagram showing data flow among various entities according to an embodiment of the present invention.

Fig. 3 is a diagram showing data flow among various entities according to another embodiment of the present invention.

Fig. 4 is a flow chart showing a method for generating stable video and haptic feedback data associated with the stable video according to an embodiment of the present invention.

Fig. 5 is a block diagram schematically showing an exemplary arrangement which may be used in a receiving electronic device and/or a transmitting electronic device according to an embodiment of the present invention.

DETAILED DESCRIPTION

Hereinafter, the present disclosure is described with reference to embodiments shown in the attached drawings. However, it is to be understood that those descriptions are provided for illustrative purposes only, rather than limiting the present disclosure. Further, in the following, descriptions of known structures and techniques are omitted so as not to unnecessarily obscure the concept of the present disclosure.

The term "user" used herein may refer not only to a human user, but also to a non-human user, such as a computer or any device that receives a request and responds to it.

In general, the present invention proposes a solution that compensates for and removes undesirable motion dynamics present in a video, which are known to cause motion sickness and impair proper visualization when experiencing the video via an HMD. Further, the solution transforms and reproduces the undesirable motion dynamics via haptic feedback to the user, together with the compensated video. In this way, the user experiences a smooth video but at the same time can feel all the rich motion dynamics as captured in the original video.
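As an illustration of this split, a motion trajectory can be decomposed into a smooth component that drives the stabilized video and a residual component that drives the haptic feedback. The moving-average filter below is a minimal sketch under our own assumptions; the patent does not prescribe any particular decomposition method:

```python
def decompose_motion(trajectory, window=5):
    """Split a 1-D motion trajectory (e.g. per-frame camera
    displacement) into a smooth component, used to render the
    stable video, and a residual component, which carries the
    vibrations to be reproduced as haptic feedback."""
    n = len(trajectory)
    smooth = []
    for i in range(n):
        # Moving average over a centered window (clipped at the ends).
        lo = max(0, i - window // 2)
        hi = min(n, i + window // 2 + 1)
        smooth.append(sum(trajectory[lo:hi]) / (hi - lo))
    residual = [t - s for t, s in zip(trajectory, smooth)]
    return smooth, residual

# Illustrative input: a slow pan with an alternating vibration on top.
raw = [i + (1.5 if i % 2 else -1.5) for i in range(20)]
smooth, residual = decompose_motion(raw)
```

The smooth component follows the intentional pan, while the residual retains the alternating vibration; in a full system the residual would be synchronized with the stabilized frames before being rendered on a haptic device.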

Such a solution is highly advantageous, for example, in remote control operations where a remote operator operates a machine or robot and desires an immersive and detailed view of the remote location (for example, unmanned or manned surface or underwater vehicles, unmanned or manned aerial vehicles, or a machine operating on an oil platform at sea) via an HMD, but at the same time does not want to become sick from unpleasant motion dynamics experienced by the vehicle, machinery or robot he/she is controlling. In this way, one does not require high-fidelity simulator apparatuses, such as the moving platforms present in high-end flight or driving simulators, to reduce the motion sickness. This greatly reduces costs in remote control or virtual reality applications.

Fig. 1 is a block diagram illustrating a system 100 for providing video and haptic feedback to a user 180 according to an embodiment of the present invention. As shown in Fig. 1, the system 100 comprises a receiving electronic device 110, a transmitting electronic device 120, and a network 130 connected therebetween. The user 180, located at the receiving electronic device 110, remotely controls, for example, a vehicle (not shown) collocated with the transmitting electronic device 120, while watching live video captured by a camera 160 (for example, a panoramic camera) attached to the transmitting electronic device 120 and feeling haptic feedback generated from motion detected by one or more optional motion sensors 170 attached to the transmitting electronic device 120 and/or inferred from the captured video.

In a non-limiting example, the vehicle may be in a mine in the north of Sweden, and the remote operator (for example, the user 180) may sit in Stockholm and use a simulator apparatus (for example, the receiving electronic device 110), which could be a copy of the vehicle control panel including the steering wheel, pedals, etc., while the video and sound are rendered to the operator via the HMD 140. The user 180 may operate the vehicle using tactile gloves (for example, the haptic feedback device 150) which have vibration motors and are able to provide vibrations in various areas of the operator's hands. Alternatively or additionally, the user 180 may have pneumatic pouches (for example, also the haptic feedback device 150) placed on his/her shoe soles which can inflate/deflate to provide haptic feedback to the operator's feet. In another example, the vehicle may be a ship sailing over the North Atlantic Ocean, and the remote operator (for example, the captain or the chief mate) may be located in Beijing and remotely control the ship via a satellite link. In yet another example, the vehicle may be a drone flying over the North Atlantic Ocean, and the remote operator may be located in New York and remotely control the drone via a cellular network.

Referring back to Fig. 1, the camera 160 may be installed in the vehicle driver seat. When driving from point A to point B, the vehicle may traverse unleveled terrain with many holes. As a result, the vehicle may experience many lateral and vertical vibrations. Since the camera 160 is attached to the vehicle, the vibrations may be visible in the captured video. Several motion sensors (for example, the motion sensors 170) may be placed in the vehicle together with the camera, allowing the identification of the lateral and vertical vibrations.

In the embodiment shown in Fig. 1, the network 130 may be a communications network, such as the Internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Code Division Multiple Access (CDMA) network, a Wideband-CDMA (WCDMA) network, a TD-SCDMA network, a CDMA 2000 network, an LTE network, an LTE-A network, or any other network that currently exists or will be developed in the future and enables the communication between the receiving electronic device 110 and the transmitting electronic device 120. However, please note that the network 130 may not be present in some other embodiments of the present disclosure, since the receiving electronic device 110 may be connected to the transmitting electronic device 120 directly, for example, via a cable, a fiber, a radio channel, or a satellite link.

As shown in Fig. 1 , the transmitting electronic device 120 is functionally connected to the camera 160 for capturing images/videos of its circumstance. The transmitting electronic device 120 may be, but is not limited to, an embedded computer that is embedded into the vehicle, and the camera 160 may be a camera such as a panoramic camera that is mounted on the seat of the vehicle and connected to the embedded computer, i.e. the transmitting electronic device 120. The camera 160 may be used for capturing live raw images or videos of the environment of the vehicle, and the images/videos may be transmitted from the camera 160 to the user 180 via the transmitting electronic device 120, the network 130, and the receiving electronic device 110 in real-time. As will be described in details below, the images/videos may or may not be processed at the transmitting electronic device 120 before the transmission. Further, please note that the present disclosure is not limited to a real-time application. In other words, it may also be applied to a non-real-time application, for example, experiencing a pre-recorded video using a portable device, a flight simulation, training course, and so on. Further, the one or more optional motion sensors 170 may be mounted on the vehicle together with the camera 160 or located on different places to detect motion dynamics that the vehicle/camera 160 experiences. The one or more sensors 170 may be connected to the transmitting electronic device 120 and provide its detected sensor data to the transmitting electronic device 120. The sensors 170 may comprise at least one of accelerometer, gyro, proximity sensor, or any other sensor that can provide an indication of motion dynamics of the vehicle/camera 160. 
Please note that the sensors 170 are not indispensable, since the motion dynamics of the vehicle/camera 160 may be inferred from the video captured by the camera 160 as will be described in detail below; therefore, the sensors 170 are shown as a dashed block in Fig. 1.

In the embodiment shown in Fig. 1, the transmitting electronic device 120 receives the raw video data and optional sensor data from the camera 160 and the motion sensors 170, respectively. With or without processing, the transmitting electronic device 120 transmits the data to the receiving electronic device 110 via the network 130. After that, the receiving electronic device 110 processes the received data to render stable video and haptic feedback to the user 180 via the head-mounted display 140 and the haptic feedback device 150, respectively. The user 180 then watches the stable video while feeling the haptic feedback synchronously. In this way, motion sickness can be greatly reduced, and the user can feel the same motion dynamics that are experienced by the vehicle.

Furthermore, though the receiving electronic device 110 and the transmitting electronic device 120 are named "receiving" and "transmitting", respectively, in Fig. 1, their functionalities are not limited thereto. As mentioned above, the user 180 is remotely controlling the vehicle based on the stable video and the haptic feedback, and therefore the user 180 may issue control instructions to the vehicle via the receiving electronic device 110 and the transmitting electronic device 120. For example, the user 180 may use a remote controller (not shown) connected to the receiving electronic device 110 to operate the transmitting electronic device 120 and therefore the vehicle. In this regard, the receiving electronic device 110 may transmit control data to the transmitting electronic device 120, and the communication therebetween is bi-directional. More details will be described below with reference to Fig. 2 and Fig. 3.

As mentioned above, the processing of the raw video data and the optional sensor data can be carried out at different locations, such as the receiving electronic device 110 or the transmitting electronic device 120 or any other device located on the path from the camera 160 to the user 180. In other words, an electronic device which could implement the above processing may be located at any point on the path from the camera 160 to the user 180, and such an electronic device (hereinafter "first electronic device") could be either a transmitting electronic device or a receiving electronic device. Detailed descriptions of both cases will be given below with reference to Fig. 2 and Fig. 3, respectively.

Fig. 2 is a diagram showing data flow among various entities according to an embodiment of the present disclosure. In the embodiment shown in Fig. 2, the raw video and the sensor data are processed at the receiving electronic device 110, rather than the transmitting electronic device 120. In this embodiment, the receiving electronic device 110 is the first electronic device which implements the processing of the raw video and/or the sensor data.

As shown in Fig. 2, raw video may be captured by a (panoramic) camera 160 with varying motion dynamics. In other words, if the user 180 watched the raw video directly, it could cause motion sickness after a very short time (typically, below 5 minutes). At S11, the raw video is transmitted from the camera 160 to the transmitting electronic device 120. Further, at an optional step S12, one or more optional sensors 170, placed on the same body to which the camera 160 is attached (e.g. a vehicle, machinery, a robot or a person), may collect the motion data of the camera 160 and transmit it to the transmitting electronic device 120 simultaneously or sequentially.

Please note that the motion data detected by the sensors 170 may or may not be in sync with the video captured by the camera 160. If they are not in sync, some processing needs to be done. Since such processing is not a concern of the present disclosure, it is omitted for simplicity; examples can be found in, for example, U.S. Patent Pub. No. US 2013/0227410 A1 or U.S. Patent Pub. No. US 2014/0205260 A1.

Upon receiving the raw video and the optional sensor data, the transmitting electronic device 120 in this embodiment will forward these data to the receiving electronic device 110 at step S13 rather than process the data locally. For example, the data could be transmitted to the receiving electronic device 110 as a streaming video and optional sensor data.

At the receiving electronic device 110, after the raw video data and optional sensor data are received, the motion dynamics can be inferred from the raw video and/or from the optional sensor data at step S14. For example, a method for inferring motion dynamics can be found in U.S. Patent No. US 7,558,405 B2. Then, the inferred motion dynamics are processed and the properties of the motion dynamics (frequency and persistency of oscillations, etc.) are identified. With the identified properties, the stabilization and smoothing of motion dynamics that impair or reduce the proper visualization of the video (e.g. vibrations, oscillations, etc.) may be performed. After that, the undesired motion dynamics may be transformed into haptic feedback data.
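The identification of oscillation properties mentioned above can be illustrated with a minimal zero-crossing frequency estimator applied to a zero-mean motion signal. This is a sketch under our own assumptions (the patent leaves the analysis method open); a real implementation would more likely use spectral analysis:

```python
import math

def oscillation_frequency(signal, sample_rate):
    """Estimate the dominant oscillation frequency (in Hz) of a
    zero-mean motion signal by counting sign changes: two zero
    crossings correspond to one full oscillation period."""
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    duration = (len(signal) - 1) / sample_rate
    return crossings / (2 * duration) if duration > 0 else 0.0

# A 5 Hz vibration sampled at 100 Hz for one second.
sig = [math.sin(2 * math.pi * 5 * t / 100) for t in range(100)]
freq = oscillation_frequency(sig, 100)  # close to 5 Hz
```

Persistency could similarly be gauged by checking whether the crossing rate remains stable across successive windows of the signal.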

The receiving electronic device 110 may transmit the stabilized video data and the haptic feedback data to the head-mounted display 140 and the haptic feedback device 150, respectively, and finally the user 180 may be presented with the stable video and the haptic feedback simultaneously at S15. The video is displayed in the HMD 140 and, at the same time, the haptic feedback is provided to the user 180 at one or several body locations such as the hands, buttocks, arms, legs, feet, etc.

In this way, the problem of motion sickness when experiencing videos via an HMD is addressed, without requiring high-fidelity simulator apparatuses, such as the moving platforms present in high-end flight or driving simulators, to reduce the motion sickness. This greatly reduces costs in remote control or virtual reality applications. Further, the user may experience a smooth video while still feeling all the rich motion dynamics captured in the original video. This is valuable, for example, in remote control operations where a remote operator desires an immersive and detailed view of the remote location via an HMD, but at the same time does not want to become sick from unpleasant motion dynamics experienced by the vehicle, machinery or robot he/she is controlling.

Further, besides the stabilization/smoothing, other methods could be applied jointly. As an example, it is known that high scene complexity in a video, together with the motion, may result in more severe sickness than a scene of low complexity, as described in the paper "A Metric to Quantify Virtual Scene Movement for the Study of Cybersickness: Definition, Implementation, and Verification" by Richard H. Y. So, et al. Hence, besides video stabilization/smoothing, certain areas of the video may be filtered or removed in order to make the video scene more homogeneous.
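As a toy illustration of such homogenization, the sketch below replaces a border margin of a frame with a uniform value, keeping only a central region of interest. This is our own minimal example; the patent does not specify which areas are filtered or how:

```python
def homogenize_periphery(frame, margin, fill=0):
    """Return a copy of `frame` (a list of rows of pixel
    intensities) in which a `margin`-pixel border is replaced by
    the uniform value `fill`, reducing scene complexity outside
    the central region of interest."""
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]  # leave the input untouched
    for y in range(h):
        for x in range(w):
            if y < margin or y >= h - margin or x < margin or x >= w - margin:
                out[y][x] = fill
    return out

# Illustrative 6x6 frame of uniform intensity 1.
frame = [[1] * 6 for _ in range(6)]
masked = homogenize_periphery(frame, margin=2)
```

In practice one would more likely blur or downscale the periphery rather than blank it, but the principle of lowering detail outside the region of interest is the same.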

For example, consider a real-time application where not enough motion realism is being provided via haptic feedback to the user at a given time (e.g., because the motion experienced by the vehicle is too complex and the computing resources of the first electronic device are not sufficient). In that case, one may want the video to be less stabilized, but at the same time homogenize the video scene and reduce its complexity, for example, by filtering/removing unnecessary details, so that the motion sickness can still be reduced. In this way, the generation of the stable video may further comprise reducing the complexity of the raw video by filtering or removing at least a partial area of the raw video.

In the following, please refer to Fig. 3, which is a diagram showing data flow among various entities according to another embodiment of the present invention. The major difference between Fig. 2 and Fig. 3 is that the processing of the raw video and the generation of the haptic feedback data are performed at the transmitting electronic device 120 (in this embodiment, also known as the first electronic device), rather than at the receiving electronic device 110 as shown in Fig. 2.

As shown in Fig. 3, raw video is captured by a (panoramic) camera 160 with varying motion dynamics. At S21, the raw video is transmitted from the camera 160 to the transmitting electronic device 120. Further, at an optional step S22, one or more optional sensors 170, placed on the same body to which the camera 160 is attached (e.g. a vehicle, machinery, a robot or a person), collect the motion data of the camera 160 and transmit it to the transmitting electronic device 120.

Upon receiving the raw video and the optional sensor data, the transmitting electronic device 120 infers the motion dynamics from the raw video and/or from the optional sensor data at step S23. Then, the inferred motion dynamics are processed and the properties of the motion dynamics (frequency and persistency of oscillations, etc.) are identified. With the identified properties, the stabilization and/or smoothing of motion dynamics that impair or reduce the proper visualization of the video (e.g. vibrations, oscillations, etc.) is performed. After that, the undesired motion dynamics are transformed into haptic feedback data. The stable video data and the haptic feedback data are transmitted from the transmitting electronic device 120 to the receiving electronic device 110 at step S24.

At the receiving electronic device 110, after the stable video data and the haptic feedback data are received, the receiving electronic device 110 may forward the stable video and the haptic feedback data to the head-mounted display 140 and the haptic feedback device 150, respectively, and finally the user 180 may be presented with the stable video and the haptic feedback simultaneously. The video is displayed in the HMD 140 and the haptic feedback is provided to the user 180 at one or several body locations such as the hands, buttocks, arms, legs, etc.

The data processing (S14 in Fig. 2 and S23 in Fig. 3, comprising at least the stabilization/smoothing of the raw video and the generation of the haptic feedback data) could be performed either by the receiving electronic device 110 as shown in Fig. 2 or by the transmitting electronic device 120 as shown in Fig. 3. That is, and as mentioned above, the first electronic device could be either the transmitting electronic device 120 or the receiving electronic device 110. However, the present disclosure is not limited thereto. For example, part of the processing may be performed by the receiving electronic device 110 and the rest may be performed by the transmitting electronic device 120. To be more specific, in an alternative embodiment, the transmitting electronic device 120 may receive the raw video and the sensor data, and it may process the sensor data to extract the haptic feedback data therefrom without processing the raw video. In other words, the receiving electronic device 110 will receive the raw video from the transmitting electronic device 120 together with the haptic feedback data generated by the transmitting electronic device 120. Such an embodiment is also feasible, albeit with some waste of network bandwidth due to the redundant motion information contained in both the raw video and the haptic feedback data.

In another alternative embodiment, the transmitting electronic device 120 may leave the sensor data untouched but process the raw video to generate the stable video, and it is then the receiving electronic device 110 that processes the sensor data to generate the haptic feedback data. In general, the stabilization of the video and the generation of the haptic feedback data can be performed separately at any node in the system 100, and such alternative embodiments should be considered as falling within the scope of the present disclosure. In other words, the functionalities of the first electronic device can be separated and distributed over several places and carried out by different entities. Further, in yet another alternative embodiment, all the operations, comprising, for example, the video capturing, motion sensing, processing of the raw video and/or sensor data, and providing the stable video and the haptic feedback, may be performed in a single device. In this case, one user may use this device to capture the video and motion dynamics, and then this user or another user may later connect an HMD and/or a haptic feedback device to the single device to receive an immersive video experience while feeling haptic feedback at the same time.

Next, a method performed by the first electronic device (in the embodiment shown in Fig. 4, either the receiving electronic device 110 or the transmitting electronic device 120) for generating a stable video and haptic feedback data associated with the stable video will be described with reference to Fig. 4.

As shown in Fig. 4, according to one embodiment, the method begins with step S420, where motion information is determined from the raw video. In this embodiment, the raw video is received either from the transmitting electronic device 120 (if the method is performed at the receiving electronic device 110), from a camera 160 coupled to the transmitting electronic device 120 (if the method is performed at the transmitting electronic device 120), or from any other external source. Further, in some alternative embodiments, instead of being received from an external source, the raw video is generated locally, for example, by a Virtual Reality (VR) engine (such as a flight simulator, a VR game, a VR training program, etc.). Further, in an alternative embodiment, the motion information is determined further based on motion sensor data. The sensor data may be received, at an optional step S410, directly from one or more motion sensors 170 located together with the camera 160 (for example, the sensors as shown in Fig. 3) or indirectly from one or more motion sensors 170 via another electronic device (for example, the second electronic device as shown in Fig. 2).
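One possible stand-in for the motion-determination step S420 is phase correlation between consecutive frames, sketched below. The disclosure does not prescribe a particular algorithm; a real system might instead use optical flow and/or fuse the motion sensor data, so the function and frame sizes here are assumptions:

```python
import numpy as np

def global_shift(prev, curr):
    """Estimate the global translation (dy, dx) of frame `curr` relative
    to frame `prev` by phase correlation over the 2-D Fourier domain."""
    cross = np.fft.fft2(curr) * np.conj(np.fft.fft2(prev))
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = (int(i) for i in np.unravel_index(np.argmax(corr), corr.shape))
    h, w = prev.shape
    if dy > h // 2:                          # map wrap-around peaks to
        dy -= h                              # signed shifts
    if dx > w // 2:
        dx -= w
    return dy, dx

# Two synthetic "frames": the second is the first shifted 3 px down, 5 px right.
rng = np.random.default_rng(0)
prev = rng.random((64, 64))
curr = np.roll(prev, shift=(3, 5), axis=(0, 1))
print(global_shift(prev, curr))  # → (3, 5)
```

Accumulating such per-frame shifts over time yields the camera trajectory on which the frequency analysis of the following steps operates.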

At step S430, the stable video is generated by compensating for at least parts of the motion in the raw video based on at least parts of the motion information. To be more specific, in an embodiment, the motion information comprises low, medium, and high frequency motion information. Based on the identified motion frequency information, the low frequency motion above 0.2 Hz, which may cause motion sickness, and the medium and high frequency motion, which may cause fatigue and nervous irritability of the user, are removed or cancelled from the raw video to generate the stable video. In other words, the user can watch the video with only the low frequency motion below 0.2 Hz, which will probably not cause motion sickness or fatigue to the user.
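The frequency split described above can be sketched as follows for a one-dimensional camera trajectory. The sharp frequency-domain cut and the function name are illustrative assumptions; a production stabilizer would use a properly designed filter and then warp the frames by the kept motion:

```python
import numpy as np

def split_motion(trajectory, fps, cutoff_hz=0.2):
    """Split a 1-D camera trajectory into the motion kept in the stable
    video (components below cutoff_hz) and the motion removed from it."""
    spectrum = np.fft.rfft(trajectory)
    freqs = np.fft.rfftfreq(len(trajectory), d=1.0 / fps)
    spectrum[freqs > cutoff_hz] = 0.0        # drop motion above 0.2 Hz
    kept = np.fft.irfft(spectrum, n=len(trajectory))
    return kept, trajectory - kept           # (stable motion, removed motion)

fps = 30
t = np.arange(0, 10, 1.0 / fps)
slow = np.sin(2 * np.pi * 0.1 * t)           # tolerable drift, kept in video
shake = 0.5 * np.sin(2 * np.pi * 1.0 * t)    # shake, removed from video
kept, removed = split_motion(slow + shake, fps)
```

The `removed` component is exactly the motion that step S440 below turns into haptic feedback, so no motion information is lost overall.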

In these embodiments, the relevant frequencies depend greatly on the movement which the camera experiences. The movement may be of large amplitude and constant (for example, a boat in the water, always oscillating), very sporadic (potholes, opening and closing a door), or a continuous, persistent vibration from a machine motor, for example. Therefore, it may not be practical to detail all the types of frequencies that can occur. In other words, all frequencies could be harmful, depending on how persistent they are. If the oscillations have low persistency and are very sporadic, then typically the medium and high frequencies would be the most harmful. In an embodiment of the present disclosure, persistent motion with frequencies between 0.2 Hz and 0.4 Hz, for which motion sickness is highest, may be removed. Further, the compensating for at least parts of the motion in the raw video comprises at least one of stabilization and smoothing.
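For the embodiment that removes only the persistent 0.2-0.4 Hz band, an idealized frequency-domain notch can be sketched as below. The function and parameter names are assumptions; a real implementation would first verify that the band is persistent before removing it:

```python
import numpy as np

def remove_band(trajectory, fps, low_hz=0.2, high_hz=0.4):
    """Remove only the 0.2-0.4 Hz band, for which motion sickness is
    highest, and return (stable motion, removed motion)."""
    spectrum = np.fft.rfft(trajectory)
    freqs = np.fft.rfftfreq(len(trajectory), d=1.0 / fps)
    in_band = (freqs >= low_hz) & (freqs <= high_hz)
    removed = np.fft.irfft(np.where(in_band, spectrum, 0), n=len(trajectory))
    return trajectory - removed, removed

fps = 30
t = np.arange(0, 10, 1.0 / fps)
sway = np.sin(2 * np.pi * 0.3 * t)           # inside the sickness band
buzz = 0.2 * np.sin(2 * np.pi * 2.0 * t)     # outside the notch, kept
stable, removed = remove_band(sway + buzz, fps)
```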

At step S440, the haptic feedback data, which is synchronized with the stable video, is generated based on the at least parts of the motion information. To be more specific, in an embodiment of the present disclosure, the low (above 0.2 Hz), medium, and high frequency motion that is removed or cancelled from the raw video is rendered to the user 180 as haptic feedback via the haptic feedback device 150.
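One simple way to turn the removed motion into actuator commands is to map its windowed RMS energy onto a vibration intensity. The windowing, normalization, and names below are illustrative assumptions, not taken from the disclosure:

```python
import numpy as np

def haptic_envelope(removed_motion, frames_per_window=10):
    """Map the removed motion residual onto a per-window vibration
    intensity in [0, 1] suitable for driving a haptic actuator."""
    n = len(removed_motion) // frames_per_window * frames_per_window
    windows = np.asarray(removed_motion[:n]).reshape(-1, frames_per_window)
    rms = np.sqrt(np.mean(windows ** 2, axis=1))   # energy per window
    peak = rms.max()
    return rms / peak if peak > 0 else rms

# Strong shaking in the first half of the clip, stillness in the second:
motion = np.concatenate([np.sin(np.linspace(0, 20 * np.pi, 150)), np.zeros(150)])
drive = haptic_envelope(motion)
```

Because each window corresponds to a fixed number of video frames, the resulting intensity sequence stays aligned with the stable video timeline, which is what step S450 relies on for synchronous rendering.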

In this way, the user experiences a smooth video while at the same time feeling all the rich motion dynamics captured in the original video. According to an embodiment, the method further comprises step S450, where the stable video and the haptic feedback based on the haptic feedback data may be synchronously rendered to the user 180. With this step, the first electronic device presents the user 180 with the stable video and the haptic feedback in sync.

Fig. 5 schematically shows an embodiment of an arrangement 500 which may be used in the first electronic device (the receiving electronic device 110 and/or the transmitting electronic device 120) according to an embodiment of the present disclosure.

The arrangement 500 comprises a processing unit 506, e.g., with a Digital Signal Processor (DSP). The processing unit 506 may be a single unit or a plurality of units performing different actions of the procedures described herein. The arrangement 500 may also comprise an input unit 502 for receiving signals from other entities and an output unit 504 for providing signal(s) to other entities. The input unit 502 and the output unit 504 may be arranged as an integrated entity or as separate entities. Furthermore, the arrangement 500 may comprise at least one computer program product 508 in the form of a non-volatile or volatile memory, e.g., an Electrically Erasable Programmable Read-Only Memory (EEPROM), a flash memory and/or a hard drive. The computer program product 508 comprises a computer program 510, which comprises code/computer readable instructions which, when executed by the processing unit 506 in the arrangement 500, cause the arrangement 500 and/or the first electronic device (the receiving electronic device 110 and/or the transmitting electronic device 120) in which it is comprised to perform the actions, e.g., of the procedures described earlier in conjunction with Fig. 2, Fig. 3, Fig. 4 or any other variant.

The computer program 510 may be configured as computer program code structured in computer program modules 510A - 510D. Hence, in an exemplifying embodiment where the arrangement 500 is used in the first electronic device 110, the code in the computer program of the arrangement 500 includes a determining module 510A for determining motion information from the raw video. The code in the computer program further includes a first generating module 510B for generating the stable video by compensating for at least parts of the motion in the raw video based on at least parts of the motion information. The code in the computer program further includes a second generating module 510C for generating the haptic feedback data which is synchronized with the stable video based on the at least parts of the motion information. The code in the computer program 510 may comprise further optional modules, illustrated as module 510D, e.g. for synchronously rendering the stable video and the haptic feedback based on the haptic feedback data when the arrangement 500 is used in the receiving electronic device 110.
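The data flow through modules 510A - 510D can be sketched as below. The stand-in implementations are deliberately trivial (scalar "frames", full stabilization to the first frame); only the composition of the modules is meant to mirror the description:

```python
from dataclasses import dataclass

def determine_motion(raw_video):                      # module 510A
    """Toy motion estimate: frame-to-frame differences."""
    return [b - a for a, b in zip(raw_video, raw_video[1:])]

def generate_stable_video(raw_video, motion):         # module 510B
    """Toy stabilization: hold the first frame (all motion removed)."""
    return [raw_video[0]] * len(raw_video)

def generate_haptic_data(motion):                     # module 510C
    """Toy mapping: magnitude of removed motion drives the actuator."""
    return [abs(m) for m in motion]

@dataclass
class Output:
    stable_video: list
    haptic_data: list

def process(raw_video):
    motion = determine_motion(raw_video)
    return Output(generate_stable_video(raw_video, motion),
                  generate_haptic_data(motion))       # 510D renders both in sync

print(process([0, 2, 1]))  # → Output(stable_video=[0, 0, 0], haptic_data=[2, 1])
```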

The computer program modules could essentially perform the actions of the flow illustrated in Figs. 2-4, to emulate the receiving electronic device 110 and/or the transmitting electronic device 120. In other words, when the different computer program modules are executed in the processing unit 506, they may correspond to different modules in the transmitting electronic device 120 and/or the receiving electronic device 110.

Although the code means in the embodiments disclosed above in conjunction with Fig. 5 are implemented as computer program modules which, when executed in the processing unit, cause the arrangement to perform the actions described above in conjunction with the figures mentioned above, at least one of the code means may in alternative embodiments be implemented at least partly as hardware circuits. The processor may be a single CPU (Central Processing Unit), but could also comprise two or more processing units. For example, the processor may include general purpose microprocessors, instruction set processors and/or related chip sets, and/or special purpose microprocessors such as Application Specific Integrated Circuits (ASICs). The processor may also comprise board memory for caching purposes. The computer program may be carried by a computer program product connected to the processor. The computer program product may comprise a computer readable medium on which the computer program is stored. For example, the computer program product may be a flash memory, a Random Access Memory (RAM), a Read-Only Memory (ROM), or an EEPROM, and the computer program modules described above could in alternative embodiments be distributed on different computer program products in the form of memories within the first electronic device.

The present disclosure is described above with reference to the embodiments thereof. However, those embodiments are provided for illustrative purposes only, rather than limiting the present disclosure. The scope of the disclosure is defined by the attached claims as well as equivalents thereof. Those skilled in the art can make various alterations and modifications without departing from the scope of the disclosure, all of which fall within the scope of the disclosure.