

Title:
SYSTEM AND METHOD FOR INTEGRATING THREE DIMENSIONAL VIDEO AND GALVANIC VESTIBULAR STIMULATION
Document Type and Number:
WIPO Patent Application WO/2017/112593
Kind Code:
A1
Abstract:
A method for providing galvanic vestibular stimulation corresponding to accelerations in scenes of a video. Embodiments include (1) receiving a three dimensional video signal including three dimensional video angular velocity information, (2) receiving a three dimensional head orientation signal including three dimensional head angular velocity information, (3) calculating resultant three dimensional angular velocity information based on the video angular velocity information and the head angular velocity information, (4) calculating three dimensional acceleration information based on the resultant three dimensional angular velocity information, and (5) generating three dimensional galvanic vestibular stimulation signals corresponding to the resultant three dimensional angular acceleration information, wherein the three dimensional stimulation signals will stimulate three dimensional acceleration sensations corresponding to accelerations on the user viewing the displayed scenes.

Inventors:
PRADHAN GAURAV N (US)
CEVETTE MICHAEL J (US)
STEPANEK JAN (US)
Application Number:
PCT/US2016/067560
Publication Date:
June 29, 2017
Filing Date:
December 19, 2016
Assignee:
MAYO FOUNDATION (US)
International Classes:
H04N13/04; H04N5/222
Foreign References:
US20030073922A1 (2003-04-17)
US20100113150A1 (2010-05-06)
US20110029045A1 (2011-02-03)
JP2011523515A (2011-08-11)
US20130171596A1 (2013-07-04)
US5762612A (1998-06-09)
US20110044604A1 (2011-02-24)
US8718796B2 (2014-05-06)
Other References:
See also references of EP 3395067A4
Attorney, Agent or Firm:
LINDER, Walter C. et al. (US)
Claims:
CLAIMS

The following is claimed:

1. A method for providing galvanic vestibular stimulation corresponding to accelerations in scenes of a video, comprising:

receiving a three dimensional video signal including three dimensional video angular velocity information, wherein the video signal is representative of scenes that can be displayed to a user;

receiving a three dimensional head orientation signal including three dimensional head angular velocity information, wherein the head orientation signal is representative of the orientation of the head of the user viewing the displayed scenes;

calculating resultant three dimensional angular velocity information based on the video angular velocity information and the head angular velocity information, wherein the resultant three dimensional angular velocity information is representative of three dimensional angular velocities of the user with respect to the viewed scenes;

calculating three dimensional acceleration information based on the resultant three dimensional angular velocity information, wherein the three dimensional acceleration information is representative of three dimensional accelerations of the user viewing the scenes; and

generating three dimensional galvanic vestibular stimulation signals corresponding to the resultant three dimensional angular acceleration information, wherein the three dimensional stimulation signals will stimulate three dimensional acceleration sensations corresponding to accelerations on the user viewing the displayed scenes.

2. The method of claim 1 wherein the three dimensions represent velocities or accelerations about at least three axes.

3. The method of claim 2 wherein the at least three axes include pitch, yaw and roll axes.

4. The method of claim 3 and further including:

receiving a two dimensional video signal; and

generating the three dimensional video signal based on the two dimensional video signal.

5. The method of claim 2 and further including:

receiving a two dimensional video signal; and

generating the three dimensional video signal based on the two dimensional video signal.

6. The method of claim 1 and further including:

receiving a two dimensional video signal; and

generating the three dimensional video signal based on the two dimensional video signal.

7. The method of claim 6 and further including:

displaying the scenes represented by the video signal to a user; and

applying the three dimensional stimulation signals to the user while displaying the scenes.

8. The method of claim 1 and further including:

displaying the scenes represented by the video signal to a user; and

applying the three dimensional stimulation signals to the user while displaying the scenes.

9. A system for providing integrated three dimensional (3D) video and corresponding 3D galvanic vestibular stimulation (GVS), comprising:

a 3D visual stimulation system including:

a virtual environment user system;

a visual display coupled to the virtual environment user system; and

a head tracker coupled to the virtual environment user system;

a 3D angular velocity calculator coupled to the 3D visual stimulation system;

a 3D acceleration calculator coupled to the 3D angular velocity calculator; and

a 3D GVS command generator coupled to the 3D acceleration calculator.

10. The system of claim 9 and further including a 3D video converter coupled to the 3D visual stimulation system.

11. The system of claim 10 and further including a two dimensional video source coupled to the 3D video converter.

12. The system of claim 11 wherein the three dimensions represent velocities or accelerations about at least three axes.

13. The system of claim 12 wherein the at least three axes include pitch, yaw and roll axes.

14. The system of claim 9 wherein the three dimensions represent velocities or accelerations about at least three axes.

15. The system of claim 14 wherein the at least three axes include pitch, yaw and roll axes.

Description:
SYSTEM AND METHOD FOR INTEGRATING THREE DIMENSIONAL VIDEO AND GALVANIC VESTIBULAR STIMULATION

FIELD OF THE INVENTION

[0001] The invention relates generally to virtual reality stimulation systems. In particular, the invention relates to the integration of galvanic vestibular stimulation with three dimensional video.

BACKGROUND

[0002] So-called virtual reality or enhanced reality systems such as the Rift device developed by Oculus VR present three dimensional visual stimulation to a user by displaying three dimensional video on a headset. Information representative of the orientation and/or movement of the user's head is tracked by a tracking device, and that information is used by the system to control the display of the video (e.g., so the scenes in the displayed video properly correspond to those in the direction that the user turns his or her head). It is also known to provide galvanic vestibular stimulation (GVS) in conjunction with the visual stimulation in order to provide the user with a sense of motion or acceleration to enhance the virtual experience. Systems of these types are disclosed, for example, in the Campbell U.S. Patent 5,762,612, the LaValle et al. U.S. Patent 9,063,30, and the Brokken et al. U.S. Patent Application Publication 2011/0044604, all of which are incorporated herein by reference in their entireties and for all purposes. GVS systems are disclosed, for example, in the Cevette et al. U.S. Patent 8,718,796, which is incorporated herein by reference in its entirety and for all purposes. There remains, however, a continuing need for enhanced virtual reality systems. In particular, there is a need for such systems that provide enhanced integrated vestibular stimulation corresponding to the visual stimulation.

SUMMARY

[0003] Embodiments of the invention include a method for providing galvanic vestibular stimulation corresponding to accelerations in scenes of a video. In embodiments, the method comprises (1) receiving a three dimensional video signal including three dimensional video angular velocity information, wherein the video signal is representative of scenes that can be displayed to a user; (2) receiving a three dimensional head orientation signal including three dimensional head angular velocity information, wherein the head orientation signal is representative of the orientation of the head of the user viewing the displayed scenes; (3) calculating resultant three dimensional angular velocity information based on the video angular velocity information and the head angular velocity information, wherein the resultant three dimensional angular velocity information is representative of three dimensional angular velocities of the user with respect to the viewed scenes; (4) calculating three dimensional acceleration information based on the resultant three dimensional angular velocity information, wherein the three dimensional acceleration information is representative of three dimensional accelerations of the user viewing the scenes; and (5) generating three dimensional galvanic vestibular stimulation signals corresponding to the resultant three dimensional angular acceleration information, wherein the three dimensional stimulation signals will stimulate three dimensional acceleration sensations corresponding to accelerations on the user viewing the displayed scenes. The three dimensions represent velocities or accelerations about at least three axes, such as pitch, yaw and roll, in embodiments. Embodiments also include (1) receiving a two dimensional video signal; and (2) generating the three dimensional video signal based on the two dimensional video signal.
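For illustration only, the five summarized steps can be sketched in Python as follows. The function name, the component-wise combination of angular velocities, and the finite-difference acceleration are assumptions of this sketch, not limitations or details of the disclosure.

```python
import numpy as np

def gvs_for_frame(video_angular_velocity, head_angular_velocity,
                  previous_resultant, frame_interval):
    """Illustrative walk-through of the five summarized steps for one frame.

    All angular quantities are 3-element pitch/yaw/roll vectors; the names,
    the component-wise combination, and the finite-difference acceleration
    are assumptions made for this sketch only.
    """
    # Steps (1) and (2): angular velocities received from the 3D video
    # signal and from the head orientation signal.
    video_w = np.asarray(video_angular_velocity, dtype=float)
    head_w = np.asarray(head_angular_velocity, dtype=float)

    # Step (3): resultant angular velocity of the user with respect to the
    # viewed scenes (combined component-wise in this sketch).
    resultant_w = video_w + head_w

    # Step (4): acceleration information as the change in the resultant
    # angular velocity over the frame interval.
    acceleration = (resultant_w - np.asarray(previous_resultant, dtype=float)) / frame_interval

    # Step (5): the acceleration vector is handed to a GVS command
    # generator; a placeholder stands in for the electrode signals here.
    gvs_signals = acceleration
    return resultant_w, gvs_signals
```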

DESCRIPTION OF THE DRAWINGS

[0004] FIG. 1 is a block diagram of a three dimensional video and galvanic vestibular stimulation system in accordance with embodiments of the invention.

DESCRIPTION OF THE INVENTION

[0005] A system 10 and method to provide integrated three dimensional (3D) video and corresponding 3D galvanic vestibular stimulation (GVS) in accordance with embodiments of the invention can be described with reference to FIG. 1. As shown, the system 10 includes two dimensional (2D) video source 12, 3D video converter 14, 3D visual stimulation system 16, resultant 3D angular velocity calculator 18 and GVS stimulation system 20. 2D video source 12 can be any desired source of 2D video data or signals. Examples of video source 12 include a TV receiver, a DVD or other playback device, a video game console, or a drone camera or other camera that provides 2D video signals representative of visual scenes such as TV programming, movies, video game fields and scenes captured by a camera. In embodiments, the output of the 2D video source 12 is a composite video signal. The 2D video signal provided by the video source 12 is coupled to the 3D video converter 14. 3D video converter 14 is configured to receive a video signal over a Universal Serial Bus (USB) in embodiments of the invention. In such embodiments configured for use with a 2D video source 12 that outputs a composite video signal, a composite-to-USB converter (not shown) can be coupled between the 2D video source and the 3D video converter 14. 2D video source 12 can provide video in other formats and 3D video converter 14 can receive video in other formats in other embodiments of the invention.

[0006] 3D video converter 14 converts the received 2D video signal into corresponding 3D video data or signals. In embodiments, the 3D video converter 14 includes a computer executing OpenGL (Open Graphics Library). Other embodiments are based on other 3D video formats. The 3D video signal generated by video converter 14 includes data representative of 3D video angular velocity information characteristic of the motion in the corresponding scenes of the 3D video. For example, the 3D video angular velocity information can be numerically expressed as angular velocities in each of the pitch, yaw and roll directions and represented in matrix form. In embodiments, the 3D video signal produced by the video converter 14 has a format compatible with the 3D visual stimulation system 16. Other embodiments (not shown) include a converter for converting the 3D video signal produced by the video converter 14 to a format compatible with the stimulation system 16.
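As a purely illustrative example (the values and the pitch/yaw/roll column ordering are hypothetical, not taken from the disclosure), the video angular velocity information expressed in matrix form with one row per frame might look like this:

```python
import numpy as np

# Hypothetical per-frame 3D video angular velocities (rad/s) about the
# pitch, yaw and roll axes, held in matrix form with one row per frame.
# The values and the column ordering are illustrative assumptions only.
video_angular_velocity = np.array([
    [0.10, 0.00, 0.02],   # frame 0: gentle pitch with a slight roll
    [0.12, 0.05, 0.02],   # frame 1: pitch plus a yaw to the right
    [0.08, 0.10, 0.00],   # frame 2: yaw becomes dominant
])
```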

[0007] 3D visual stimulation system 16 includes a virtual environment user system 30, visual display 32 and head tracker 34 in the illustrated embodiment. Visual display 32 is driven by the virtual environment user system 30 and provides a 3D visual display of the 3D video received from the 3D video converter 14. In embodiments, the visual display 32 is an Oculus Rift head-mounted display of the type developed by Oculus VR. Other virtual reality display systems are used in other embodiments. Head tracker 34 generates, and provides to the virtual environment user system 30, information or signals representative of the orientation of the user's head or eyes while the user is viewing the video on the visual display 32. Head tracker 34 can, for example, be built into the visual display 32 (e.g., including accelerometers that detect motion), and/or can be an external device such as a sensor. The head orientation information provided by head tracker 34 can be in the form of 3D head angular velocities representative of the user's head motion. In embodiments, the head angular velocity information provided by head tracker 34 is numerically expressed as angular velocities in each of the pitch, yaw and roll directions of the head, and is represented in matrix form. Virtual environment user system 30, which can, for example, be a video game console or a general purpose computer with video driver hardware and software, drives the visual display 32 in response to the received video signal from 3D video converter 14 and the head orientation information provided by head tracker 34 to provide the user with a visual virtual reality environment corresponding to the video and the movement of his or her head.

[0008] Resultant angular velocity calculator 18 receives the 3D video angular velocity information and the 3D head angular velocity information. In the illustrated embodiment, this video and head angular velocity information is received through the virtual environment user system 30. In other embodiments (not shown), either or both of the 3D video and head angular velocity information can be received directly from the 3D video converter 14 or the head tracker 34, respectively. Angular velocity calculator 18 calculates resultant 3D angular velocities on the basis of both the video and head motion angular velocities (i.e., the corresponding angular velocities of the user with respect to the viewed scenes of the video), and provides signals representative of that information. In embodiments, the resultant 3D angular velocities are expressed as angular velocities in each of the pitch, yaw and roll directions, and are represented in matrix form. Angular velocity calculator 18 is a programmed computer in embodiments.
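A minimal sketch of such a resultant calculation follows, assuming a simple component-wise combination of the pitch, yaw and roll angular velocities; the disclosure does not limit the combination rule, so the formulation and names here are assumptions.

```python
import numpy as np

def resultant_angular_velocity(video_w, head_w):
    """Combine video and head pitch/yaw/roll angular velocities.

    A simple component-wise combination is assumed here for illustration;
    the disclosure does not commit to a particular combination rule.
    """
    return np.asarray(video_w, dtype=float) + np.asarray(head_w, dtype=float)

# Example: the scene yaws right while the user also turns the head right,
# so the resultant yaw velocity exceeds either input alone.
resultant = resultant_angular_velocity([0.0, 0.10, 0.0], [0.0, 0.05, 0.0])
```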

GVS stimulation system 20 is coupled to receive the resultant 3D angular velocity information from the angular velocity calculator 18, and generates 3D GVS commands or signals that can be applied to the user through electrodes (not shown). The 3D GVS signals stimulate 3D acceleration sensations that correspond to the accelerations on the user viewing the video scenes. The illustrated embodiment of GVS stimulation system 20 includes 3D acceleration calculator 40 and GVS command generator 42. 3D acceleration calculator 40 processes the resultant 3D angular velocity information and calculates 3D acceleration information associated with the angular velocity information (e.g., determines the changes in the 3D angular velocities). In embodiments, the 3D acceleration information represents accelerations in each of the pitch, yaw and roll directions. 3D acceleration calculator 40 is a programmed computer in embodiments.
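One illustrative reading of "determines the changes in the 3D angular velocities" is a finite difference over successive velocity samples, as in the sketch below; the fixed sample interval and the finite-difference formulation are assumptions of the example rather than details of the patented calculator.

```python
import numpy as np

def angular_acceleration(resultant_w_samples, sample_interval):
    """Estimate pitch/yaw/roll angular accelerations by finite differences.

    resultant_w_samples: array of shape (n_samples, 3) holding successive
    resultant angular velocities; sample_interval: seconds between samples.
    This formulation is assumed for illustration only.
    """
    w = np.asarray(resultant_w_samples, dtype=float)
    return np.diff(w, axis=0) / sample_interval
```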

[0009] GVS command generator 42 converts the 3D acceleration information into electrical signals suitable and appropriate for application to the user's body (e.g., through electrodes) that will stimulate the corresponding 3D acceleration sensations in the user. In embodiments, command generator 42 generates GVS signals that provide stimulation representative of acceleration in each of the pitch, yaw and roll dimensions. In embodiments, GVS stimulation system 20 can be a four channel system that provides motion perception through four electrodes placed on the left mastoid, forehead, right mastoid, and the nape of the neck. The amplitude, locations and/or timing of the commands applied to the various electrodes can be controlled to provide GVS commands in real time in response to the 3D angular velocity information provided by angular velocity calculator 18, thereby stimulating motion perceptions corresponding to the visual stimulation presented to the user. By way of example, approaches such as those disclosed in the above-identified Cevette et al. U.S. Patent 8,718,796 can be used for this purpose. Other embodiments include other electrode numbers and placement configurations, and other stimulation approaches.
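By way of a hedged illustration only, a four channel command generator of the kind described above might scale and clip each pitch/yaw/roll acceleration component onto the four electrode sites; the gain matrix and current limit below are invented placeholders and are not taken from this disclosure or from the Cevette et al. patent.

```python
import numpy as np

# Hypothetical gain matrix mapping a (pitch, yaw, roll) acceleration vector
# to currents on four electrodes: left mastoid, forehead, right mastoid and
# nape of the neck. The gains and the current limit are invented
# placeholders used only for illustration.
ELECTRODE_GAINS = np.array([
    [ 0.0,  0.5, -0.5],   # left mastoid
    [ 0.5,  0.0,  0.0],   # forehead
    [ 0.0, -0.5,  0.5],   # right mastoid
    [-0.5,  0.0,  0.0],   # nape of the neck
])

MAX_CURRENT_MA = 2.0  # assumed safety clamp on electrode current

def gvs_commands(angular_acceleration):
    """Map a pitch/yaw/roll acceleration vector to four electrode currents."""
    currents = ELECTRODE_GAINS @ np.asarray(angular_acceleration, dtype=float)
    return np.clip(currents, -MAX_CURRENT_MA, MAX_CURRENT_MA)
```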

[0010] In embodiments, 3D video converter 14, virtual environment user system 30, resultant 3D angular velocity calculator 18 and 3D acceleration calculator 40 can be implemented by separate computer systems or processors. In other embodiments, 3D video converter 14, virtual environment user system 30, resultant 3D angular velocity calculator 18 and 3D acceleration calculator 40 represent logical functions, and some or all of these functions can be performed by other computer system architectures. For example, the functions performed by 3D video converter 14, virtual environment user system 30 and resultant angular velocity calculator 18 can be provided by a GVS motion app on one computer system.

[0011] Embodiments of the invention offer important advantages. For example, they can enhance the virtual reality experience of users by providing motion stimulation that closely represents the motion corresponding to the video scene. The system and method are flexible in that they can be used in connection with any desired 2D video source. The system and method are also efficient to implement.

[0012] Although the invention has been described with reference to preferred embodiments, those skilled in the art will recognize that changes can be made in form and detail without departing from the spirit and scope of the invention. For example, as used herein, the term "virtual reality" is intended to broadly encompass any system and method that provides at least in part electronically generated stimulation to a user, including for example so-called "augmented reality" environments.