Title:
SYSTEMS AND METHODS FOR PROVIDING ROTATIONAL MOTION CORRECTION
Document Type and Number:
WIPO Patent Application WO/2019/222059
Kind Code:
A1
Abstract:
Video information and rotational position information may be obtained. The video information may define spherical video content having a progress length and captured by image capture device(s) during a capture duration. The rotational position information may characterize rotational positions of the image capture device(s) during the capture duration. The rotational positions of the image capture device(s) during the capture duration may be determined based on the rotational position information. The spherical video content may be rotated based on the rotational positions of the image capture device(s) during the capture duration. The rotation of the spherical video content may include rotation of one or more spherical video frames of the spherical video content to compensate for the rotational positions of the image capture device(s) during the capture duration and to stabilize the spherical video content.

Inventors:
COTOROS INGRID A (US)
ARNOUX MARTIN (US)
BARNA SANDOR LEE (US)
BRUNEAU GARANCE (US)
Application Number:
PCT/US2019/031840
Publication Date:
November 21, 2019
Filing Date:
May 10, 2019
Assignee:
GOPRO INC (US)
International Classes:
G06T3/60; H04N5/232
Domestic Patent References:
WO2017017675A1 (2017-02-02)
Foreign References:
US9277122B1 (2016-03-01)
US20170180647A1 (2017-06-22)
US9241103B2 (2016-01-19)
US20180036632A1 (2018-02-08)
US20170287107A1 (2017-10-05)
Attorney, Agent or Firm:
ESPLIN, D. Benjamin et al. (US)
Claims:
What is claimed is:

1. A system that provides rotational motion correction for spherical videos, the system comprising:

one or more physical processors configured by machine-readable instructions to:

obtain video information defining spherical video content, the spherical video content having a progress length, the spherical video content including spherical video frames that define visual content viewable from a point of view as a function of progress through the progress length, the spherical video frames corresponding to moments within the progress length, the spherical video frames including a given spherical video frame corresponding to a given moment within the progress length, wherein the spherical video content is captured by one or more image capture devices during a capture duration;

obtain rotational position information of the one or more image capture devices, the rotational position information characterizing rotational positions of the one or more image capture devices during the capture duration;

determine the rotational positions of the one or more image capture devices during the capture duration based on the rotational position information, the rotational positions of the one or more image capture devices including a given rotational position of the one or more image capture devices for the given moment within the progress length; and

rotate the spherical video content based on the rotational positions of the one or more image capture devices during the capture duration such that the given spherical video frame is rotated based on the given rotational position of the one or more image capture devices, the rotation of the spherical video content including rotation of one or more of the spherical video frames to compensate for the rotational positions of the one or more image capture devices during the capture duration and to stabilize the spherical video content.

2. The system of claim 1, wherein the rotational position information is generated by a set of motion sensors.

3. The system of claim 2, wherein the set of motion sensors includes an accelerometer and a gyroscope.

4. The system of claim 3, wherein the accelerometer generates accelerometer information, and the gyroscope generates gyroscope information based on the rotational positions of the one or more image capture devices during the capture duration, the rotational position information is determined based on the accelerometer information and the gyroscope information, and one or more portions of the accelerometer information are used to correct a long-term drift in one or more portions of the gyroscope information.

5. The system of claim 1, wherein one or more image sensors of the one or more image capture devices utilize a rolling shutter during the capture of the spherical video content and the one or more physical processors are further configured by the machine-readable instructions to warp the spherical video content based on the rotational positions of the one or more image capture devices during the capture duration such that the given spherical video frame is warped based on the given rotational position of the one or more image capture devices, the warping of the spherical video content including warping of one or more spherical video frames to compensate for the rolling shutter of the one or more image sensors during the capture duration and to provide rolling shutter correction.

6. The system of claim 5, wherein the spherical video content is warped further based on acquisition information for the spherical video content, the acquisition information characterizing one or more exposure times of the one or more image capture devices used to capture the spherical video content.

7. The system of claim 1, wherein the one or more physical processors are further configured by the machine-readable instructions to:

obtain a first viewing direction for presentation of the stabilized spherical video content, the first viewing direction associated with a first user;

obtain a second viewing direction for presentation of the stabilized spherical video content, the second viewing direction associated with a second user, the second viewing direction different from the first viewing direction; and

effectuate simultaneous presentation of the stabilized spherical video content to the first user and the second user, the simultaneous presentation of the stabilized spherical video content including presentation of the stabilized spherical video content to the first user based on the first viewing direction and presentation of the stabilized spherical video content to the second user based on the second viewing direction.

8. The system of claim 7, wherein the presentation of the stabilized spherical video content to the first user and the presentation of the stabilized spherical video content to the second user are synchronized such that a particular moment in the progress length of the stabilized spherical video content is presented to the first user and the second user at a same time, the presentation of the stabilized spherical video content corresponding to the particular moment to the first user and the second user including presentations of different portions of the visual content viewable from the point of view based on a difference between the first viewing direction and the second viewing direction.

9. The system of claim 8, wherein the one or more image capture devices are carried by a vehicle during the capture duration, and the simultaneous presentation of the stabilized spherical video content is effectuated during the capture duration.

10. The system of claim 9, wherein the first user has control over motion of the vehicle and the second user does not have the control over the motion of the vehicle.

11. The system of claim 1, wherein the one or more image capture devices include a given image capture device, the given image capture device comprising:

a housing;

a first image sensor carried by the housing and configured to generate a first output signal conveying first visual information based on light that becomes incident thereon;

a second image sensor carried by the housing and configured to generate a second output signal conveying second visual information based on light that becomes incident thereon;

a first optical element configured to guide light within a first field of view to the first image sensor, the first field of view being greater than 180 degrees, the first optical element being carried by the housing; and

a second optical element configured to guide light within a second field of view to the second image sensor, the second field of view being greater than 180 degrees, the second optical element being carried by the housing such that a peripheral portion of the first field of view and a peripheral portion of the second field of view overlap, the overlap of the peripheral portion of the first field of view and the peripheral portion of the second field of view enabling spherical capture of visual content based on the first visual information and the second visual information;

wherein the given image capture device is configured to switch between a spherical capture mode and a non-spherical capture mode, the given image capture device operating in the spherical capture mode including a first processing resource for the first image sensor and a second processing resource for the second image sensor being in operation for the spherical capture of the visual content and the given image capture device operating in the non-spherical capture mode including the first processing resource for the first image sensor being in operation for capture of first reduced visual content based on the first visual information and the second processing resource for the second image sensor not being in operation for capture of second reduced visual content based on the second visual information.

12. The system of claim 11, wherein the given image capture device is configured to switch between the spherical capture mode and the non-spherical capture mode based on the second visual information.

13. The system of claim 11, wherein the capture of the first reduced visual content based on the first visual information during the non-spherical capture mode enables presentation of a stabilized view of the first reduced visual content, the stabilized view having a viewing angle of at least 130 degrees.

14. The system of claim 11, wherein the second processing resource is in operation for the capture of first reduced visual content during the non-spherical capture mode.

15. A method for providing rotational motion correction for spherical videos, the method performed by a computing system including one or more processors, the method comprising:

obtaining, by the computing system, video information defining spherical video content, the spherical video content having a progress length, the spherical video content including spherical video frames that define visual content viewable from a point of view as a function of progress through the progress length, the spherical video frames corresponding to moments within the progress length, the spherical video frames including a given spherical video frame corresponding to a given moment within the progress length, wherein the spherical video content is captured by one or more image capture devices during a capture duration;

obtaining, by the computing system, rotational position information of the one or more image capture devices, the rotational position information characterizing rotational positions of the one or more image capture devices during the capture duration;

determining, by the computing system, the rotational positions of the one or more image capture devices during the capture duration based on the rotational position information, the rotational positions of the one or more image capture devices including a given rotational position of the one or more image capture devices for the given moment within the progress length; and

rotating, by the computing system, the spherical video content based on the rotational positions of the one or more image capture devices during the capture duration such that the given spherical video frame is rotated based on the given rotational position of the one or more image capture devices, the rotation of the spherical video content including rotation of one or more of the spherical video frames to compensate for the rotational positions of the one or more image capture devices during the capture duration and to stabilize the spherical video content.

16. The method of claim 15, wherein the rotational position information is generated by a set of motion sensors.

17. The method of claim 16, wherein the set of motion sensors includes an accelerometer and a gyroscope.

18. The method of claim 17, wherein the accelerometer generates accelerometer information, and the gyroscope generates gyroscope information based on the rotational positions of the one or more image capture devices during the capture duration, the rotational position information is determined based on the accelerometer information and the gyroscope information, and one or more portions of the accelerometer information are used to correct a long-term drift in one or more portions of the gyroscope information.

19. The method of claim 15, wherein one or more image sensors of the one or more image capture devices utilize a rolling shutter during the capture of the spherical video content and the method further comprises warping, by the computing system, the spherical video content based on the rotational positions of the one or more image capture devices during the capture duration such that the given spherical video frame is warped based on the given rotational position of the one or more image capture devices, the warping of the spherical video content including warping of one or more of the spherical video frames to compensate for the rolling shutter of the one or more image sensors during the capture duration and to provide rolling shutter correction.

20. The method of claim 19, wherein the spherical video content is warped further based on acquisition information for the spherical video content, the acquisition information characterizing one or more exposure times of the one or more image capture devices used to capture the spherical video content.

Description:
SYSTEMS AND METHODS FOR PROVIDING ROTATIONAL MOTION CORRECTION

FIELD

(01) This disclosure relates to providing rotational motion correction based on rotational positions of image capture device(s) during capture of video content.

BACKGROUND

(02) Capture of video content (e.g., spherical video content) by one or more image capture devices may include artifacts due to rotational motion of the image capture device(s). Rotational motion of the image capture device(s) during the capture of the video content may cause the playback of the video content to appear jerky/shaky.

SUMMARY

(03) This disclosure relates to providing rotational motion correction for video content. Video information, rotational position information, and/or other information may be obtained. The video information may define spherical video content having a progress length. The spherical video content may be captured by one or more image capture devices during a capture duration. The spherical video content may include spherical video frames that define visual content viewable from a point of view as a function of progress through the progress length of the spherical video content. The spherical video frames may correspond to moments within the progress length. The spherical video frames may include a given spherical video frame corresponding to a given moment within the progress length. The rotational position information may characterize rotational positions of the image capture device(s) during the capture duration.

(04) The rotational positions of the image capture device(s) during the capture duration may be determined based on the rotational position information and/or other information. The rotational positions of the image capture device(s) may include a given rotational position of the image capture device(s) for the given moment within the progress length. The spherical video content may be rotated based on the rotational positions of the image capture device(s) during the capture duration and/or other information. The spherical video content may be rotated such that the given spherical video frame is rotated based on the given rotational position of the image capture device(s) and/or other information. The rotation of the spherical video content may include rotation of one or more of the spherical video frames to compensate for the rotational positions of the image capture device(s) during the capture duration and to stabilize the spherical video content.

(05) A system that provides rotational motion correction for video content may include one or more electronic storage, one or more processors, and/or other components. An electronic storage may store video information defining video content (e.g., spherical video content), rotational position information, and/or other information. Video content may refer to media content that may be consumed as one or more videos. Video content may include one or more videos stored in one or more formats/containers, and/or other video content. Video content may have a progress length. Video content may define visual content viewable as a function of progress through the progress length of the video content. Video content may include video frames that define visual content. That is, visual content of the video content may be included within video frames of the video content. Video content may include spherical video content and/or other video content. Spherical video content may include spherical video frames that define visual content viewable from a point of view as a function of progress through the progress length of the spherical video content. The video frames (e.g., spherical video frames) may correspond to different moments (points in time, durations of time) within the progress length. The video frames may include a given video frame (e.g., a given spherical video frame) corresponding to a given moment within the progress length. Video content may be captured by one or more image capture devices during a capture duration. In some implementations, one or more image sensors of the image capture device(s) may utilize a rolling shutter during the capture of the video content. In some implementations, the video content (e.g., spherical video content) may be consumed as virtual reality content.

(06) The processor(s) may be configured by machine-readable instructions. Executing the machine-readable instructions may cause the processor(s) to facilitate providing rotational motion correction for video content. The machine-readable instructions may include one or more computer program components. The computer program components may include one or more of a video information component, a rotational position information component, a rotational position component, a rotation component, and/or other computer program components. In some implementations, the computer program components may include a warp component, a presentation component, and/or other computer program components.

(07) The video information component may be configured to obtain video information defining one or more video content (e.g., spherical video content) and/or other information. The video information component may obtain video information from one or more storage locations. The video information component may obtain video information during acquisition of the video content and/or after acquisition of the video content by one or more image sensors/image capture devices.

(08) The rotational position information component may be configured to obtain rotational position information and/or other information. The rotational position information may characterize rotational position of image capture device(s) during the capture duration of the video content (e.g., spherical video content). In some implementations, one or more portions of the rotational position information may be generated by a set of motion sensors. The set of motion sensors may include one or more accelerometers, one or more gyroscopes, and/or other motion sensors. The accelerometer may generate accelerometer information, and the gyroscope may generate gyroscope information based on the rotational positions of the image capture device(s) during the capture duration. The rotational position information may be determined based on the accelerometer information, the gyroscope information, and/or other information. One or more portions of the accelerometer information may be used to correct a long-term drift in one or more portions of the gyroscope information.
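
The drift correction described above can be illustrated with a complementary filter, a common way to let accelerometer readings correct long-term gyroscope drift. The following is a minimal sketch, not the patent's algorithm; the function name, the blend factor alpha, and the axis conventions are illustrative assumptions, and only pitch/roll are estimated since gravity provides no yaw reference.

```python
import numpy as np

def fuse_orientation(gyro_rates, accels, dt, alpha=0.98):
    """Estimate pitch/roll over time from gyroscope rates (rad/s) and
    accelerometer samples (m/s^2), both arrays of shape (N, 3)."""
    pitch, roll = 0.0, 0.0
    history = []
    for w, a in zip(gyro_rates, accels):
        # Integrate gyroscope rates: accurate short-term, drifts long-term.
        pitch += w[1] * dt
        roll += w[0] * dt
        # Gravity direction from the accelerometer: noisy short-term,
        # but drift-free long-term.
        acc_pitch = np.arctan2(-a[0], np.hypot(a[1], a[2]))
        acc_roll = np.arctan2(a[1], a[2])
        # Blend: mostly gyro, slowly pulled toward the accelerometer,
        # which corrects the accumulated gyroscope drift.
        pitch = alpha * pitch + (1.0 - alpha) * acc_pitch
        roll = alpha * roll + (1.0 - alpha) * acc_roll
        history.append((pitch, roll))
    return np.array(history)
```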

(09) The rotational position component may be configured to determine the rotational positions of the image capture device(s) during the capture duration based on the rotational position information and/or other information. The rotational positions of the image capture device(s) may include a given rotational position of the image capture device(s) for the given moment within the progress length and/or other rotational positions of the image capture device(s) for other moments within the progress length.

(10) The rotation component may be configured to rotate the video content (e.g., spherical video content) based on the rotational positions of the image capture device(s) during the capture duration and/or other information. The video content may be rotated such that the given video frame (e.g., given spherical video frame) is rotated based on the given rotational position of the image capture device(s) and/or other information. The rotation of the video content may include rotation of one or more of the video frames (e.g., spherical video frames) to compensate for the rotational positions of the image capture device(s) during the capture duration and to stabilize the video content (e.g., spherical video content).

(11) The warp component may be configured to warp the video content (e.g., spherical video content) based on the rotational positions of the image capture device(s) during the capture duration and/or other information. The video content may be warped such that the given video frame (e.g., given spherical video frame) is warped based on the given rotational position of the image capture device(s) and/or other information. The warping of the video content may include warping of one or more video frames (e.g., spherical video frames) to compensate for the rolling shutter of the image sensor(s) during the capture duration and to provide rolling shutter correction. In some implementations, the video content may be warped further based on acquisition information for the video content and/or other information. The acquisition information may characterize one or more exposure times of the image capture device(s) used to capture the video content.
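
One common way to realize the rolling-shutter warp described above, assumed here for illustration rather than taken from the patent, is to give each image row its own corrective rotation, since each row is exposed at a slightly different time within the frame. The sketch below linearly interpolates per-row rotation angles; all names are hypothetical, and linear interpolation of Euler angles is only reasonable for the small inter-frame rotations typical of shake.

```python
import numpy as np

def row_rotations(rot_start, rot_end, num_rows, readout_time, frame_dt):
    """Interpolate per-row rotation angles (yaw/pitch/roll triples) across
    the sensor readout. rot_start/rot_end are the device rotations at the
    start of this frame and at the next frame; readout_time is how long
    the sensor takes to read all rows; frame_dt is the frame interval."""
    rows = np.arange(num_rows)
    # Fraction of the frame interval at which each row finishes exposing.
    t = (rows / max(num_rows - 1, 1)) * (readout_time / frame_dt)
    return rot_start[None, :] * (1.0 - t[:, None]) + rot_end[None, :] * t[:, None]

start = np.array([0.0, 0.0, 0.0])   # yaw/pitch/roll at frame start (degrees)
end = np.array([2.0, -1.0, 0.5])    # device rotation at the next frame
print(row_rotations(start, end, 5, 0.008, 0.0333))
```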

(12) The presentation component may be configured to effectuate presentation of the stabilized video content (e.g., stabilized spherical video content) on one or more displays. The presentation component may effectuate presentation of the stabilized video content on the display(s) based on one or more viewing directions and/or other information. A viewing direction may define a direction of a view for the stabilized video content (e.g., from the point of view of the stabilized spherical video content) as the function of progress through the progress length of the stabilized video content.

(13) The presentation component may be configured to effectuate simultaneous presentation of the stabilized video content (e.g., stabilized spherical video content) to multiple users. The presentation component may obtain a first viewing direction and a second viewing direction for presentation of the stabilized video content. The first viewing direction may be associated with a first user, and the second viewing direction may be associated with a second user. The second viewing direction may be different from the first viewing direction. The presentation component may be configured to effectuate simultaneous presentation of the stabilized video content to the first user and the second user. The simultaneous presentation of the stabilized video content may include presentation of the stabilized video content to the first user based on the first viewing direction and/or other information, and presentation of the stabilized video content to the second user based on the second viewing direction and/or other information.

(14) In some implementations, the presentation of the stabilized video content to the first user and to the second user may be synchronized such that a particular moment in the progress length of the stabilized video content is presented to the first user and the second user at a same time. The presentation of the stabilized video content corresponding to the particular moment for the first user and the second user may include presentations of different portions of the visual content of the video content (e.g., different visual content viewable from the point of view of the stabilized spherical video content) based on a difference between the first viewing direction and the second viewing direction.

(15) In some implementations, the image capture device(s) may be carried by a vehicle during the capture duration, and the simultaneous presentation of the stabilized video content may be effectuated during the capture duration. In some implementations, the first user may have control over motion of the vehicle, and the second user may not have the control over the motion of the vehicle.

(16) In some implementations, the image capture device(s) may include a given image capture device and/or other image capture devices. The given image capture device may comprise a housing; a first image sensor carried by the housing and configured to generate a first output signal conveying first visual information based on light that becomes incident thereon; a second image sensor carried by the housing and configured to generate a second output signal conveying second visual information based on light that becomes incident thereon; a first optical element carried by the housing and configured to guide light within a first field of view to the first image sensor, the first field of view being greater than 180-degrees; and a second optical element carried by the housing and configured to guide light within a second field of view to the second image sensor, the second field of view being greater than 180-degrees. The second optical element may be carried by the housing such that a peripheral portion of the first field of view and a peripheral portion of the second field of view overlap. The overlap of the peripheral portion of the first field of view and the peripheral portion of the second field of view may enable spherical capture of visual content based on the first visual information and the second visual information.

(17) The given image capture device may be configured to switch between a spherical capture mode, a non-spherical capture mode, and/or other capture modes. The given image capture device operating in the spherical capture mode may include a first processing resource for the first image sensor and a second processing resource for the second image sensor being in operation for the spherical capture of the visual content. The given image capture device operating in the non-spherical capture mode may include the first processing resource for the first image sensor being in operation for capture of first reduced visual content based on the first visual information and the second processing resource for the second image sensor not being in operation for capture of second reduced visual content based on the second visual information. The second processing resource may be in operation for the capture of first reduced visual content during the non-spherical capture mode.

(18) In some implementations, the given image capture device may be configured to switch between the spherical capture mode, the non-spherical capture mode, and/or other capture modes based on the second visual information and/or other information.

(19) In some implementations, the capture of the first reduced visual content based on the first visual information during the non-spherical capture mode may enable presentation of a stabilized view of the first reduced visual content. The stabilized view may have a viewing angle of at least 130-degrees.

(20) These and other objects, features, and characteristics of the system and/or method disclosed herein, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the invention. As used in the specification and in the claims, the singular form of "a," "an," and "the" include plural referents unless the context clearly dictates otherwise.

BRIEF DESCRIPTION OF THE DRAWINGS

(21) FIG. 1 illustrates a system that provides rotational motion correction for video content.

(22) FIG. 2 illustrates a method for providing rotational motion correction for video content.

(23) FIG. 3 illustrates an example spherical video content.

(24) FIG. 4 illustrates example viewing directions for spherical video content.

(25) FIGS. 5A-5B illustrate example extents of spherical video content.

(26) FIG. 6 illustrates an example image capture device.

(27) FIG. 7A illustrates an example rotation of spherical video content based on image capture device rotation.

(28) FIG. 7B illustrates an example rotation of spherical video content to stabilize spherical video content.

(29) FIGS. 8-9 illustrate example viewing windows for example stabilized spherical video content.

DETAILED DESCRIPTION

(30) FIG. 1 illustrates a system 10 for providing rotational motion correction for video content. The system 10 may include one or more of a processor 11, an interface 12 (e.g., bus, wireless interface), an electronic storage 13, and/or other components. Video information, rotational position information, and/or other information may be obtained by the processor 11. The video information may define spherical video content having a progress length. The spherical video content may be captured by one or more image capture devices during a capture duration. The spherical video content may include spherical video frames that define visual content viewable from a point of view as a function of progress through the progress length of the spherical video content. The spherical video frames may correspond to moments within the progress length. The spherical video frames may include a given spherical video frame corresponding to a given moment within the progress length. The rotational position information may characterize rotational positions of the image capture device(s) during the capture duration.

(31) The rotational positions of the image capture device(s) during the capture duration may be determined based on the rotational position information and/or other information. The rotational positions of the image capture device(s) may include a given rotational position of the image capture device(s) for the given moment within the progress length. The spherical video content may be rotated based on the rotational positions of the image capture device(s) during the capture duration and/or other information. The spherical video content may be rotated such that the given spherical video frame is rotated based on the given rotational position of the image capture device(s) and/or other information. The rotation of the spherical video content may include rotation of one or more of the spherical video frames to compensate for the rotational positions of the image capture device(s) during the capture duration and to stabilize the spherical video content.
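
The rotation step described above amounts to counter-rotating each spherical frame by the device orientation sampled for that frame's moment, so the scene stays fixed while the camera shakes. Below is a minimal sketch of that idea using scipy's rotation utilities; the patent does not prescribe a rotation representation, and the function and variable names are illustrative.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def stabilize_directions(directions, device_rotation):
    """directions: (N, 3) unit vectors sampled on the spherical frame.
    device_rotation: scipy Rotation for the capture device at this moment.
    Returns the vectors rotated by the inverse of the device rotation,
    compensating for the device's rotational position."""
    return device_rotation.inv().apply(directions)

# Example: the camera rolled 10 degrees about its x-axis during capture;
# counter-rotating the sphere restores the original orientation.
device_rot = Rotation.from_euler("xyz", [10.0, 0.0, 0.0], degrees=True)
forward = np.array([[0.0, 0.0, 1.0]])
print(stabilize_directions(forward, device_rot))
```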

(32) The electronic storage 13 may be configured to include electronic storage medium that electronically stores information. The electronic storage 13 may store software algorithms, information determined by the processor 11, information received remotely, and/or other information that enables the system 10 to function properly. For example, the electronic storage 13 may store information relating to video content (e.g., spherical video content), image capture device(s), rotations of image capture device(s), motion sensor(s), rotations of video content, and/or other information.

(33) Video content may refer to media content that may be consumed as one or more videos/video clips. Video content may include one or more videos/video clips stored in one or more formats/containers, and/or other video content. A format may refer to one or more ways in which the information defining video content is arranged/laid out (e.g., file format). A container may refer to one or more ways in which information defining video content is arranged/laid out in association with other information (e.g., wrapper format).

(34) Video content may include a video clip captured by a video capture device, multiple video clips captured by a video capture device, and/or multiple video clips captured by different video capture devices. Video content may include multiple video clips captured at the same time and/or multiple video clips captured at different times. Video content may include a video clip processed by a video application, multiple video clips processed by a video application, and/or multiple video clips processed by different video applications. Video content may be captured by one or more image capture devices during one or more capture durations. In some implementations, one or more image sensors of the image capture device(s) may utilize a rolling shutter during the capture of the video content. Rolling shutter correction may be applied to compensate for rolling shutter of the image sensor(s).

(35) Video content may have a progress length. A progress length may be defined in terms of time durations and/or frame numbers. For example, video content may include a video having a time duration of 60 seconds. Video content may include a video having 1800 video frames. Video content having 1800 video frames may have a play time duration of 60 seconds when viewed at 30 frames/second. Other progress lengths, time durations, and frame numbers are contemplated.

(36) Video content may define visual content viewable as a function of progress through the progress length of the video content. Video content may include video frames that define visual content. That is, visual content of the video content may be included within video frames of the video content. The video frames may correspond to different moments within the progress length. The video frames may include a given video frame corresponding to a given moment within the progress length.

(37) In some implementations, video content may include one or more spherical video content, virtual reality content, and/or other video content. Spherical video content and/or virtual reality content may define visual content viewable from a point of view as a function of progress through the progress length of the spherical video/virtual reality content.

(38) Spherical video content may refer to a video capture of multiple views from a location. Spherical video content may include a full spherical video capture (360-degrees of capture, including opposite poles) or a partial spherical video capture (less than 360-degrees of capture). Spherical video content may be captured through the use of one or more image capture devices (e.g., cameras, image sensors, optical elements) to capture images/videos from a location. Spherical video content may be generated based on light received within a field of view of a single image sensor or within fields of view of multiple image sensors during a capture duration. For example, multiple images/videos captured by multiple cameras/image sensors may be combined/stitched together to form the spherical video content. The field(s) of view of camera(s)/image sensor(s) may be moved/rotated (e.g., via movement/rotation of optical element(s), such as lens, of the image sensor(s)) to capture multiple images/videos from a location, which may be combined/stitched together to form the spherical video content.

(39) Spherical video content may include spherical video frames that define visual content viewable from a point of view as a function of progress through the progress length of the spherical video content. That is, visual content of the spherical video content may be included within spherical video frames of the spherical video content.

(40) A spherical video frame may include a spherical image of the spherical video content at a moment within the progress length of the spherical video content. For example, multiple images captured by multiple cameras/image sensors at a moment in time may be combined/stitched together to form a spherical video frame for the moment in time. A spherical video frame may include a full spherical image capture (360-degrees of capture, including opposite poles) or a partial spherical image capture (less than 360-degrees of capture). A spherical image (e.g., spherical video frame) may be comprised of multiple sub-images (sub-frames). Sub-images may be generated by a single image sensor (e.g., at different times as the field of view of the image sensor is rotated) and/or by multiple image sensors (e.g., individual sub-images for a moment in time captured by individual image sensors and combined/stitched together to form the spherical image).

(41) In some implementations, spherical video content may be stored with a 5.2K resolution. Using 5.2K spherical video content may enable viewing windows (e.g., directed to a portion of a spherical video frame) for the spherical video content with resolution close to 1080p. In some implementations, spherical video content may include 12-bit video frames.

(42) In some implementations, video content (e.g., spherical video content) may be consumed as virtual reality content. Virtual reality content may refer to content (e.g., spherical video content) that may be consumed via virtual reality experience. Virtual reality content may associate different directions within the virtual reality content with different viewing directions, and a user may view a particular direction within the virtual reality content by looking in a particular direction. For example, a user may use a virtual reality headset to change the user’s direction of view. The user’s direction of view may correspond to a particular direction of view within the virtual reality content. For example, a forward-looking direction of view for a user may correspond to a forward direction of view within the virtual reality content.

(43) Spherical video content and/or virtual reality content may have been captured at one or more locations. For example, spherical video content and/or virtual reality content may have been captured from a stationary position (e.g., a seat in a stadium). Spherical video content and/or virtual reality content may have been captured from a moving position (e.g., a moving bike). Spherical video content and/or virtual reality content may include video capture from a path taken by the capturing device(s) in the moving position. For example, spherical video content and/or virtual reality content may include video capture from a person walking around in a music festival.

(44) FIG. 3 illustrates an example video content 300 defined by video information. The video content 300 may include spherical video content. The video content 300 may define visual content viewable from a point of view (e.g., center of sphere) as a function of progress through the progress length of the video content 300. FIG. 3 illustrates example rotational axes for the video content 300. Rotational axes for the video content 300 may include a yaw axis 310, a pitch axis 320, a roll axis 330, and/or other axes. Rotations about one or more of the yaw axis 310, the pitch axis 320, the roll axis 330, and/or other axes may define viewing directions/viewing window for the video content 300.

(45) For example, a 0-degree rotation of the video content 300 around the yaw axis 310 may correspond to a front viewing direction. A 90-degree rotation of the video content 300 around the yaw axis 310 may correspond to a right viewing direction. A 180-degree rotation of the video content 300 around the yaw axis 310 may correspond to a back viewing direction. A -90-degree rotation of the video content 300 around the yaw axis 310 may correspond to a left viewing direction.

(46) A 0-degree rotation of the video content 300 around the pitch axis 320 may correspond to a viewing direction that is level with respect to horizon. A 45-degree rotation of the video content 300 around the pitch axis 320 may correspond to a viewing direction that is pitched up with respect to horizon by 45-degrees. A 90-degree rotation of the video content 300 around the pitch axis 320 may correspond to a viewing direction that is pitched up with respect to horizon by 90-degrees (looking up). A -45-degree rotation of the video content 300 around the pitch axis 320 may correspond to a viewing direction that is pitched down with respect to horizon by 45-degrees. A -90-degree rotation of the video content 300 around the pitch axis 320 may correspond to a viewing direction that is pitched down with respect to horizon by 90-degrees (looking down).

(47) A 0-degree rotation of the video content 300 around the roll axis 330 may correspond to a viewing direction that is upright. A 90-degree rotation of the video content 300 around the roll axis 330 may correspond to a viewing direction that is rotated to the right by 90-degrees. A -90-degree rotation of the video content 300 around the roll axis 330 may correspond to a viewing direction that is rotated to the left by 90-degrees. Other rotations and viewing directions are contemplated.
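
The yaw/pitch conventions above can be made concrete by mapping the angles to a unit viewing-direction vector. The sketch below assumes a y-up, z-forward coordinate frame, which is an illustrative choice rather than anything specified in the disclosure; roll is omitted, since it rotates the viewing window about this vector rather than changing the direction itself.

```python
import numpy as np

def viewing_direction(yaw_deg, pitch_deg):
    """Map yaw/pitch rotations to a unit direction: 0/0 is the front view,
    +90 yaw looks right, +90 pitch looks straight up."""
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    return np.array([
        np.cos(pitch) * np.sin(yaw),   # x: right
        np.sin(pitch),                 # y: up
        np.cos(pitch) * np.cos(yaw),   # z: forward
    ])

print(viewing_direction(0, 0))    # front: [0, 0, 1]
print(viewing_direction(90, 0))   # right: [1, 0, 0]
print(viewing_direction(0, 90))   # up:    [0, 1, 0]
```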

(48) A playback of video content (e.g., the video content 300) may include presentation of one or more portions of the video content on one or more displays based on a viewing window and/or other information. The viewing window may define extents of the visual content viewable/presented on one or more displays as the function of progress through the progress length of the video content. For spherical video content, the viewing window may define extents of the visual content viewable from the point of view as the function of progress through the progress length of the spherical video content.

(49) The viewing window may be characterized by a viewing direction, viewing size (e.g., zoom), and/or other information. A viewing direction may define a direction of view for video content. A viewing direction may define the angle/visual portion of the video content at which the viewing window is directed. A viewing direction may define a direction of view for the video content selected by a user, defined by instructions for viewing the video content, and/or determined based on other information about viewing the video content as a function of progress through the progress length of the video content. For example, a viewing direction for video content may be determined based on user input indicating where to point the viewing window or based on a director track that specifies the viewing direction to be presented during playback as a function of progress through the progress length of the video content.

(50) For spherical video content, a viewing direction may define a direction of view from the point of view from which the visual content is defined. Viewing directions for the video content may be characterized by rotations around the yaw axis 310, the pitch axis 320, the roll axis 330, and/or other axes. For example, a viewing direction of a 0-degree rotation of the video content around a yaw axis (e.g., the yaw axis 310) and a 0-degree rotation of the video content around a pitch axis (e.g., the pitch axis 320) may correspond to a front viewing direction (the viewing window is directed to a forward portion of the visual content captured within the spherical video content).

(51) For example, FIG. 4 illustrates example changes in viewing direction 400 (e.g., selected by a user for video content, specified by a director’s track) as a function of progress through the progress length of the video content. The viewing directions 400 may change as a function of progress through the progress length of the video content. For example, at 0% progress mark, the viewing directions 400 may correspond to a 0-degree yaw angle and a 0-degree pitch angle. At 25% progress mark, the viewing directions 400 may correspond to a positive yaw angle and a negative pitch angle. At 50% progress mark, the viewing directions 400 may correspond to a 0-degree yaw angle and a 0-degree pitch angle. At 75% progress mark, the viewing directions 400 may correspond to a negative yaw angle and a positive pitch angle. At 87.5% progress mark, the viewing directions 400 may correspond to a 0-degree yaw angle and a 0-degree pitch angle. Other viewing directions are contemplated.
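
A hypothetical sketch of such a director-track lookup follows: viewing directions are given at a few progress marks and linearly interpolated for any playback position. The progress marks mirror those described for FIG. 4, but the specific angle values are assumed purely for illustration.

```python
import numpy as np

# (progress fraction, yaw degrees, pitch degrees) keyframes; the angle
# magnitudes are assumed values matching the signs described for FIG. 4.
track = np.array([
    [0.000,   0.0,   0.0],
    [0.250,  30.0, -15.0],
    [0.500,   0.0,   0.0],
    [0.750, -30.0,  15.0],
    [0.875,   0.0,   0.0],
])

def direction_at(progress):
    """Linearly interpolate yaw/pitch at a progress fraction in [0, 1]."""
    yaw = np.interp(progress, track[:, 0], track[:, 1])
    pitch = np.interp(progress, track[:, 0], track[:, 2])
    return yaw, pitch

print(direction_at(0.375))  # halfway between the 25% and 50% keyframes
```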

(52) A viewing size may define a size (e.g., zoom, viewing angle) of viewable extents of visual content within the video content. A viewing size may define the dimensions of the viewing window. A viewing size may define a size of viewable extents of visual content within the video content selected by a user, defined by instructions for viewing the video content, and/or determined based on other information about viewing the video content as a function of progress through the progress length of the video content. For example, a viewing size for video content may be determined based on user input indicating the size of the viewing window or based on a director track that specifies the viewing size to be used during playback as a function of progress through the progress length of the video content. FIGS. 5A-5B illustrate examples of extents for the video content 300. In FIG. 5A, the size of the viewable extent of the video content 300 may correspond to the size of extent A 500. In FIG. 5B, the size of the viewable extent of the video content 300 may correspond to the size of extent B 510. The viewable extent of the video content 300 in FIG. 5A may be smaller than the viewable extent of the video content 300 in FIG. 5B. Other viewing sizes are contemplated.

(53) In some implementations, a viewing size may define different shapes of viewable extents. For example, a viewing window may be shaped as a rectangle, a triangle, a circle, and/or other shapes. In some implementations, a viewing size may define different rotations of the viewing window (viewing rotation). A viewing size may change based on the rotation of the viewing window. For example, a viewing window shaped as a rectangle may change the orientation of the rectangle based on whether a view of the video content includes a landscape view or a portrait view. Other rotations of a viewing window are contemplated.

(54) FIG. 6 illustrates an example image capture device 602. Video content may be captured by the image capture device 602 and/or other image capture devices. The image capture device 602 may include a housing 612, and the housing 612 may carry (be attached to, support, hold, and/or otherwise carry) an optical element A 604A, an optical element B 604B, an image sensor A 606A, an image sensor B 606B, a motion sensor 608, a processor 610, and/or other components. The optical elements 604A, 604B may include instrument(s), tool(s), and/or medium that acts upon light passing through the instrument(s)/tool(s)/medium. For example, the optical elements 604A, 604B may include one or more of lens, mirror, prism, and/or other optical elements. The optical elements 604A, 604B may affect direction, deviation, and/or path of the light passing through the optical elements 604A, 604B. While the optical elements 604A, 604B are shown in a staggered configuration, this is merely an example. In some implementations, the optical elements 604A, 604B may be arranged so that their centers are aligned.

(55) The image sensors 606A, 606B may include sensor(s) that converts received light into output (electrical) signals. The image sensors 606A, 606B may generate output signals conveying information that defines one or more images (e.g., video frames of a video). For example, the image sensors 606A, 606B may include one or more of a charge-coupled device sensor, an active pixel sensor, a complementary metal-oxide semiconductor sensor, an N-type metal-oxide-semiconductor sensor, and/or other image sensors.

(56) The image sensors 606A, 606B may be configured to generate output signals conveying visual information (defining images, videos) based on light that becomes incident thereon. The optical element A 604A may be configured to guide light within a field of view to the image sensor A 606A. The optical element B 604B may be configured to guide light within a field of view to the image sensor B 606B. The fields of view of the optical elements 604A, 604B may be greater than or equal to 180-degrees. The optical elements 604A, 604B may be carried by the housing 612 such that peripheral portions of the fields of view of the optical elements 604A, 604B overlap. The overlap of the peripheral portions of the fields of view of the optical elements 604A, 604B may enable spherical capture of visual content (e.g., images and/or videos) based on the visual information conveyed by the output signals of the image sensors 606A, 606B.

(57) The motion sensor 608 may include sensor(s) that converts experienced motions into output signals. The output signals may include electrical signals. The motion sensor 608 may generate output signals conveying information that characterizes motions and/or positions of the motion sensor 608 and/or device(s) carrying the motion sensor 608. The motions/positions characterized by the motion sensor 608 may include translational motions/positions and/or rotational motions/positions. For example, the motion sensor 608 may refer to a set of motion sensors, which may include one or more inertial measurement units, one or more accelerometers, one or more gyroscopes, and/or other motion sensors.

(58) The processor 610 may include one or more processors (logic circuitry) that provide information processing capabilities in the image capture device 602. The processor 610 may provide one or more computing functions for the image capture device 602. The processor 610 may operate/send command signals to one or more components of the image capture device 602 to operate the image capture device 602. For example, the processor 610 may facilitate operation of the image capture device 602 in capturing image(s) and/or video(s), facilitate operation of the optical elements 604A, 604B (e.g., change how light is guided by the optical elements 604A, 604B), and/or facilitate operation of the image sensors 606A, 606B (e.g., change how the received light is converted into information that defines images/videos and/or how the images/videos are post-processed after capture). Other configurations of image capture devices are contemplated.

(59) In some implementations, the image capture device 602 may be configured to switch between a spherical capture mode, a non-spherical capture mode, and/or other modes. For instance, the processor 610 may receive input(s) from a user to switch between different modes, detect conditions for which operation under one of the modes is desirable, and/or otherwise determine that the image capture device 602 should operate in a particular mode. A mode may refer to a way or a manner in which the image capture device 602 operates to capture images/videos. The spherical capture mode may refer to a way or a manner in which the image capture device 602 operates to capture spherical images/videos, and the non-spherical capture mode may refer to a way or a manner in which the image capture device 602 operates to capture non-spherical images/videos. For example, when the image capture device 602 is operating in the spherical capture mode, the image capture device 602 may utilize light received within the fields of view of both optical elements 604A, 604B to generate spherical images/videos. When the image capture device 602 is operating in the non-spherical capture mode, the image capture device 602 may utilize light received within one of the fields of view of the optical elements 604A, 604B to generate non-spherical images/videos. That is, one of the optical elements 604A, 604B and/or one of the image sensors 606A, 606B may be turned off/not operating to capture images/videos. This may allow the image capture device 602 to operate in a lower-power mode (e.g., consuming less processing power, battery) than when operating in the spherical capture mode.

(60) In some implementations, the processing resources (e.g., provided by the processor 610) of the image capture device 602 may be directed to different functions based on whether the image capture device 602 is operating in the spherical capture mode or the non-spherical capture mode. For example, the processor 610 may include a single processing unit that reserves a portion of its processing capabilities for images/videos captured by the image sensor A 606A and another portion of its processing capabilities for images/videos captured by the image sensor B 606B. The processor 610 may include multiple processing units (e.g., multiple chips) with different processing units dedicated to different image sensors 606A, 606B.

(61) When the image capture device 602 is operating in the spherical capture mode, the respective portions of processing capabilities of the processor 610 (e.g., respective portion of processing capabilities, respective chip) may be in operation for spherical capture of visual content. When the image capture device 602 is operating in the non-spherical capture mode, the portion of the processing resource of the processor 610 for the image sensor A 606A may be in operation for capture of reduced visual content (including capture of a smaller field of view than the spherical visual content) based on the visual information conveyed by output signals of the image sensor A 606A. That is, operation in the non-spherical capture mode may cause the image capture device 602 to operate as a non-spherical image capture device.

(62) In some implementations, the capture of the reduced visual content by the image capture device 602 during the non-spherical capture mode may enable presentation of a stabilized view of the reduced visual content. That is, the reduced visual content may be stabilized for viewing on one or more displays. The stabilized view of the reduced content may have a viewing angle of at least 130-degrees. That is, the viewing window for the stabilized reduced visual content may be sized to include at least 130-degrees (e.g., 135-degrees) of view. Because the reduced visual content is captured based on light received within the field of view of the optical element A 604A, which is greater than or equal to 180-degrees (e.g., 195-degrees), the viewing window for the reduced visual content may have greater freedom to move within the captured content to provide a stabilized view.
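
The stabilization headroom this implies is a simple subtraction: a viewing window narrower than the captured field of view can shift by the leftover margin before running off the captured content. This small sketch works that out using the 195-degree and 130-degree figures given above; the function name is illustrative.

```python
def stabilization_margin(capture_fov_deg, view_fov_deg):
    """Maximum per-side angular shift available for stabilization when a
    viewing window sits inside a wider captured field of view."""
    return (capture_fov_deg - view_fov_deg) / 2.0

# 195-degree capture with a 130-degree stabilized window: the window can
# absorb up to 32.5 degrees of shake in any direction.
print(stabilization_margin(195.0, 130.0))
```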

(63) When the image capture device 602 is operating in the non-spherical capture mode, the portion of the processing resource of the processor 610 for the image sensor B 606B may not be in operation for capture of reduced visual content based on the visual information conveyed by output signals of the image sensor B 606B. That is, a portion of the processing resource of the processor 610 reserved for image sensor B 606B may not be used to process images/videos captured by the image sensor B 606B. Instead, the portion of the processing resource of the processor 610 reserved for image sensor B 606B may be in operation for the capture of reduced visual content based on the visual information conveyed by output signals of the image sensor A 606A. That is, a portion of the processing resource of the processor 610 reserved for image sensor B 606B may be used to process images/videos captured by the image sensor A 606A. For example, such portion of the processing resource of the processor 610 may be used to perform image classification, image stabilization, and/or other image processing. Such portion of the processing resource of the processor 610 may be used to augment the image processing performed by the other portion of the processing resource (e.g., the portion reserved for the image sensor A 606A) or to perform image processing separate/different from the image processing performed by the other portion of the processing resource.

(64) In some implementations, the image capture device 602 may be configured to switch between the spherical capture mode, the non-spherical capture mode, and/or other modes based on the visual information of one or more of the image sensors 606A, 606B. For example, the visual information of one or more of the image sensors 606A, 606B may be analyzed by the processor 610 to determine whether the image capture device 602 should operate in the spherical capture mode, the non-spherical capture mode, and/or other modes, or to change from one mode to another. For instance, an image generated through the image sensor B 606B may be dark/black (e.g., based on the optical element B 604B being covered), and the processor 610 may operate the image capture device 602 in the non-spherical capture mode to capture reduced visual content using the optical element A 604A and the image sensor A 606A. Such setting of the operation mode for the image capture device 602 may enable a user to change between the spherical capture mode and the non-spherical capture mode by simply uncovering/covering one of the optical elements 604A, 604B, rather than by manually setting the operation of the image capture device 602 in a particular mode by interacting with interfaces (e.g., physical/virtual buttons) of the image capture device 602.
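
By way of non-limiting illustration, the mode-selection heuristic described above could be sketched as follows in Python. The function name, the brightness threshold, and the use of a mean pixel level are assumptions made for the example; they are not taken from the implementations described here.

    import numpy as np

    def select_capture_mode(sensor_b_image, dark_threshold=10.0):
        # If the image from the image sensor B is essentially dark/black
        # (e.g., its optical element is covered), fall back to the
        # non-spherical capture mode; otherwise, operate spherically.
        mean_level = float(np.mean(sensor_b_image))
        return "non-spherical" if mean_level < dark_threshold else "spherical"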

(65) Referring back to FIG. 1, the processor 11 may be configured to provide information processing capabilities in the system 10. As such, the processor 11 may comprise one or more of a digital processor, an analog processor, a digital circuit designed to process information, a central processing unit, a graphics processing unit, a microcontroller, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. The processor 11 may be configured to execute one or more machine-readable instructions 100 to facilitate providing rotational motion correction for video content. The machine-readable instructions 100 may include one or more computer program components. The machine-readable instructions 100 may include one or more of a video information component 102, a rotational position information component 104, a rotational position component 106, a rotation component 108, and/or other computer program components. In some implementations, the computer program components may include a warp component 110, a presentation component 112, and/or other computer program components.

(66) The video information component 102 may be configured to obtain video information defining one or more video content (e.g., spherical video content) and/or other information. Obtaining video information may include one or more of accessing, acquiring, analyzing, determining, examining, identifying, loading, locating, opening, receiving, retrieving, reviewing, storing, and/or otherwise obtaining the video information. The video information component 102 may obtain video information from one or more locations. For example, the video information component 102 may obtain video information from a storage location, such as the electronic storage 13, electronic storage of information and/or signals generated by one or more image sensors, electronic storage of a device accessible via a network, and/or other locations. The video information component 102 may obtain video information from one or more hardware components (e.g., an image sensor) and/or one or more software components (e.g., software running on a computing device).

(67) The video information component 102 may be configured to obtain video information defining one or more video content during acquisition of the video content and/or after acquisition of the video content by one or more image sensors. For example, the video information component 102 may obtain video information defining a video while the video is being captured by one or more image sensors. The video information component 102 may obtain video information defining a video after the video has been captured and stored in memory (e.g., the electronic storage 13).

(68) In some implementations, the video information may be obtained based on user interaction with a user interface/application (e.g., a video editing application) and/or other information. For example, a user interface/application may provide one or more options for a user to select one or more video content in which viewing directions are to be identified. The video information defining the video content may be obtained based on the user's selection of the video content through the user interface/video application.

(69) The rotational position information component 104 may be configured to obtain rotational position information and/or other information. Obtaining rotational position information may include one or more of accessing, acquiring, analyzing, determining, examining, identifying, loading, locating, opening, receiving, retrieving, reviewing, storing, and/or otherwise obtaining the rotational position information.

The rotational position information component 104 may obtain rotational position information from one or more locations. For example, the rotational position information component 104 may obtain rotational position information from a storage location, such as the electronic storage 13, electronic storage of information and/or signals generated by one or more sensors, electronic storage of a device accessible via a network, and/or other locations. The rotational position information component 104 may obtain rotational position information from one or more hardware components (e.g., a motion sensor) and/or one or more software components (e.g., software running on a computing device).

(70) The rotational position information component 104 may be configured to obtain rotational position information for video content during acquisition of the video content and/or after acquisition of the video content. For example, the rotational position information component 104 may obtain rotational position information for a video while the video is being captured by one or more image sensors. The rotational position information component 104 may obtain rotational position information for video content after the video content has been captured and stored in memory (e.g., the electronic storage 13). For example, the rotational position information may be captured and stored by one or more motion sensors, and may be obtained by the rotational position information component 104 when rotational motion correction for the video content is desired and/or to be performed.

(71) The rotational position information may characterize rotational positions of image capture device(s) during capture of video content (e.g., spherical video content). That is, the rotational position information may characterize rotational positions of image capture device(s) during the capture duration(s) of the video content. Rotational positions of an image capture device may refer to how the image capture device is oriented/rotated around one or more axes or one or more points, such as a center point. For example, rotational positions of an image capture device may refer to how the image capture device is rotated about one or more of the yaw axis, the pitch axis, and/or the roll axis while capturing video content. Rotational position information of an image capture device may characterize how the image capture device is rotated (e.g., amount of rotation about the yaw, pitch, and/or roll axes) and/or is being rotated (e.g., speed and/or direction of rotation) at different moments within a capture duration.

(72) In some implementations, the rotational position information (or one or more portions of the rotational position information) may be generated by a set of motion sensors. A set of motion sensors may include one or more sensors and/or other components. For example, referring to FIG. 6, the motion sensor 608 may include a single motion sensor or multiple motion sensors, such as one or more accelerometers, one or more gyroscopes, and/or other motion sensors. The rotational position information may include information generated by different motion sensors and/or may include information that combines information generated by different motion sensors.

(73) For example, the motion sensor 608 may include an accelerometer, a gyroscope, an inertial measurement unit, and/or other motion sensors. Based on the motion experienced by the motion sensor 608/the image capture device 602 (e.g., the rotational positions of the image capture device 602 during a capture duration), the accelerometer may generate output signals conveying accelerometer information (characterizing acceleration/forces on the image capture device 602), and the gyroscope may generate output signals conveying gyroscope information (characterizing angular positions/velocities of the image capture device 602). The rotational position information for the image capture device 602 may be determined based on the accelerometer information, the gyroscope information, and/or other information. For example, the rotational position information may include one or more combinations of the accelerometer information and the gyroscope information. Combining the accelerometer information and the gyroscope information may result in more accurate rotational position information. Such rotational position information may benefit from the quick reactivity of the gyroscope and from the long-term reliability of the accelerometer. One or more portions of the accelerometer information may be used to correct long-term drift in one or more portions of the gyroscope information (e.g., via the gravity component). The rotational position information may provide a track of rotational position/orientation data of the image capture device 602 in a coordinate system, such as an earth frame. The rotational position/orientation data may be cadenced at the same frequency as, or at a different frequency from, the output frequency of the gyroscope. In some implementations, obtaining the rotational position information may include obtaining the accelerometer information and the gyroscope information, and using the accelerometer information and the gyroscope information to generate the rotational position information.
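
By way of non-limiting illustration, one common way to combine gyroscope and accelerometer information as described above is a complementary filter. The Python sketch below is a simplification under stated assumptions (the function name, the blend factor alpha, the two-axis state, and the axis conventions are all illustrative choices, not part of the disclosed implementations): the gyroscope is integrated for quick reactivity, and the gravity direction measured by the accelerometer corrects the long-term drift.

    import math

    def complementary_filter(gyro_rates, accels, dt, alpha=0.98):
        # gyro_rates: sequence of (pitch_rate, roll_rate) in rad/s
        # accels: sequence of (ax, ay, az) accelerometer readings in m/s^2
        # dt: sampling period in seconds
        pitch = roll = 0.0
        orientations = []
        for (pitch_rate, roll_rate), (ax, ay, az) in zip(gyro_rates, accels):
            # Integrate the gyroscope: quick to react, but drifts over time.
            pitch += pitch_rate * dt
            roll += roll_rate * dt
            # Estimate pitch/roll from the gravity component of the
            # accelerometer: noisy short-term, but reliable long-term.
            accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
            accel_roll = math.atan2(ay, az)
            # Blend: the gyroscope dominates short-term, the accelerometer
            # pulls the estimate back toward gravity to correct drift.
            pitch = alpha * pitch + (1.0 - alpha) * accel_pitch
            roll = alpha * roll + (1.0 - alpha) * accel_roll
            orientations.append((pitch, roll))
        return orientations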

(74) The rotational position component 106 may be configured to determine the rotational positions of the image capture device(s) during the capture duration(s) based on the rotational position information and/or other information. The rotational position component 106 may analyze the rotational position information to determine how the image capture device(s) were rotationally positioned (e.g., rotated about one or more of the yaw, pitch, and/or roll axes; rotated about one or more points) at different moments during capture of video content. The rotational positions of the image capture device(s) may include a given rotational position of the image capture device(s) for the given moment within the progress length and/or other rotational positions of the image capture device(s) for other moments within the progress length.
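
By way of non-limiting illustration, determining a rotational position for a given moment often amounts to interpolating between timestamped sensor samples. The Python sketch below assumes SciPy is available and uses its spherical linear interpolation (Slerp); the sample timestamps, angles, and frame times are made-up values for the example.

    import numpy as np
    from scipy.spatial.transform import Rotation, Slerp

    # Hypothetical sensor samples: timestamps (seconds) and device
    # orientations expressed as yaw/pitch/roll Euler angles.
    sample_times = np.array([0.00, 0.01, 0.02, 0.03])
    sample_rots = Rotation.from_euler(
        "zyx",
        [[0, 0, 0], [2, 0, 0], [5, 1, 0], [9, 1, 1]],
        degrees=True,
    )

    # Interpolate the rotational position for each frame's capture moment.
    slerp = Slerp(sample_times, sample_rots)
    frame_times = np.array([0.005, 0.015, 0.025])  # assumed frame timestamps
    frame_rots = slerp(frame_times)
    print(frame_rots.as_euler("zyx", degrees=True))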

(75) For example, FIG. 7A illustrates two example rotational positions of an image capture device during capture of video content 700. At one moment (a point in time or a duration of time) within the capture duration, the image capture device may be rotationally positioned so that a point 702 of the image capture device is pointed towards a particular direction (e.g., north direction). The point 702 of the image capture device may refer to a point along a particular angle/direction within the spherical field of view of the image capture device. For example, the point 702 may include a center point, which lies along a center angle/direction within the spherical field of view of the image capture device. Based on the positioning of the image capture device, the front of the video content may include capture of visual content in the north direction. The point 702 of the image capture device may correspond to a center of a default viewing window (e.g., a viewing window 710) for the video content 700. That is, when the video content 700 is presented on a display, the visual content captured at the point 702 may be presented at the center of the display by default.

(76) During capture of the video content 700, the image capture device may be rotated, such as to the right as shown by image capture device rotation 720. The rotation of the image capture device may be due to an intended motion by a user of the image capture device, such as rotation of the image capture device by a user and/or rotation of an object to which the image capture device is mounted (e.g., the image capture device is mounted on a helmet of the user and the user's head is moved to look in different directions). The rotation of the image capture device may be due to an unintended motion by a user of the image capture device, such as rotation based on the user moving over a rough surface or coming into contact with another object, which causes the image capture device to shake and/or otherwise be moved.

(77) Due to the rotation of the image capture device during the capture of the video content 700, the point 702 of the image capture device may be rotated to the right. For example, if the point 702 were pointed towards a north direction, then the image capture device rotation 720 may cause the point 702 to deviate from being pointed towards the north direction (e.g., be pointed towards the east direction). Such rotation of the image capture device may cause the video content 700 to be rotated such that a viewer watching the video content 700 may see the rotation of the video content 700 caused by the image capture device rotation 720. For example, based on the rotation of the image capture device, the front of the video content may change from including capture of visual content in the north direction to including capture of visual content in the east direction. The image capture device rotation 720 may cause the viewing window 710 to be rotated towards the right (e.g., move from being positioned towards the north to being positioned towards the east). For example, if the video content 700 is captured while traveling over a rough surface, the image capture device may shake and cause rotational jitters of the video content 700. A viewer watching the video content 700 may see the video content 700 rotationally jittering (e.g., shaking), which may not provide a pleasant or enjoyable viewing experience.

(78) As another example, if the video content 700 is captured by an image capture device mounted on a user's helmet, movement of the user's head may change which portions of the video content 700 include capture of visual content in different directions. A viewer watching the video content 700 may see the video content 700 rotating, which may not be desirable or enjoyable. For instance, the user may have captured the video content 700 while riding a bicycle. A viewer of the video content 700 may wish to see a view of the bicycle ride in front of the bicycle (e.g., front view). However, the user's head movement during the bicycle ride may cause the front of the video content 700 to include visual content in different directions (e.g., include directions in which the user's head is directed, rather than the front of the bicycle). A viewer watching the video content 700 may need to manually change the viewing window for the video content 700 to see the view of the bicycle ride in front of the bicycle.

(79) The rotation component 108 may be configured to rotate the video content (e.g., spherical video content) based on the rotational positions of the image capture device(s) during the capture duration and/or other information. The rotation component 108 may use how the image capture device(s) were rotationally positioned at different moments during capture of the video content to rotate the video content (e.g., about one or more of the yaw, pitch, and/or roll axes; about one or more points). For example, based on the image capture device having been rotated about the yaw axis during capture of the video content, the rotation component 108 may rotate the video content about the yaw axis. The rotation of the video content by the rotation component 108 may compensate for changes in rotational positions of the image capture device(s) during the capture duration. The rotation of the video content by the rotation component 108 may be the inverse of the changes in rotational positions of the image capture device(s) during the capture duration. That is, if the rotational positions of the image capture device during the capture duration indicate rotations of the image capture device(s) by certain amounts about the yaw, pitch, and/or roll axes (e.g., positive yaw amount, positive pitch amount, positive roll amount), then the rotation component 108 may rotate the video content in the opposite direction (e.g., negative yaw amount, negative pitch amount, negative roll amount).

(80) The rotation of the video content by the rotation component 108 may include rotation of one or more video frames (e.g., spherical video frames) of the video content. The video content may be rotated such that a given video frame (e.g., given spherical video frame) is rotated based on the rotational position(s) of the image capture device(s) corresponding to the moment during the capture duration when the given video frame was captured and/or other information. The rotation of the video content may include rotation of one or more of the video frames (e.g., spherical video frames) of the video content to compensate for the rotational positions of the image capture device(s) during the capture duration and to stabilize the video content (e.g., spherical video content). Such rotation of the video content may reorient the video frames according to the measured rotations of the image capture device(s).
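
By way of non-limiting illustration, the per-frame inverse rotation described above could be sketched as follows in Python for an equirectangular spherical video frame. The function name, the equirectangular layout, the use of a SciPy Rotation as the camera-to-world orientation, and the nearest-neighbor sampling are all assumptions made to keep the example short; this is not the disclosed implementation.

    import numpy as np
    from scipy.spatial.transform import Rotation

    def rotate_equirect_frame(frame, device_rot):
        # frame: equirectangular spherical video frame as an (h, w[, 3]) array
        # device_rot: camera-to-world Rotation at the frame's capture moment
        h, w = frame.shape[:2]
        # Output pixel grid -> longitude/latitude in the stabilized (world) frame.
        lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi
        lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi
        lon, lat = np.meshgrid(lon, lat)
        # World-frame unit direction for every output pixel.
        d_world = np.stack([np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)], axis=-1)
        # Apply the inverse of the device rotation to find where each world
        # direction landed in the captured (camera) frame.
        d_cam = device_rot.inv().apply(d_world.reshape(-1, 3)).reshape(h, w, 3)
        src_lon = np.arctan2(d_cam[..., 1], d_cam[..., 0])
        src_lat = np.arcsin(np.clip(d_cam[..., 2], -1.0, 1.0))
        # Back to pixel coordinates in the captured frame (nearest neighbor).
        x = ((src_lon + np.pi) / (2.0 * np.pi) * w).astype(int) % w
        y = ((np.pi / 2.0 - src_lat) / np.pi * h).astype(int).clip(0, h - 1)
        return frame[y, x]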

(81) For example, based on the image capture device rotation 720 of an image capture device (shown in FIG. 7A) during capture of the video content 700, the video content 700 may be rotated as shown by video content rotation 730 (shown in FIG. 7B). The video content rotation 730 may reorient one or more video frames of the video content 700 such that the video content 700 is stabilized. The video content rotation 730 may reorient the video content 700 such that the orientation of the video content 700 with respect to the environment of the image capture device remains fixed. For example, even though the image capture device may have been rotated to the right, causing the point 702 of the image capture device to be rotated to the right, the video content 700 may remain fixed so that the front of the video content 700 includes capture of visual content in the north direction. Such stabilization of the video content 700 may allow a viewer of the video content 700 to choose which directions to view from the video content 700 without having the presentation of the video content 700 change based on rotation of the image capture device. That is, even if the image capture device may have been rotated to the right during capture of the video content 700, presentation of the video content 700 within the viewing window 710 may include presentation of visual content in the north direction.

(82) Rotation of spherical video content by the rotation component 108 may stabilize the entire spherical video content, which may enable more flexible viewing of the video content than stabilization of the video content using cropping. Stabilization of video content using cropping relies on finding small (cropped) views within video content that may be used to present a "stabilized" view of the video content. If the video content includes capture of a certain field of view (capture field of view), then the small views within the video content may include a view of a smaller field of view (viewing field of view). Based on the motion of the image capture device during capture of the video content, the cropped view may be moved within the video content to keep the same/similar visual content within the cropped views. Such "stabilized" views of the video content may be limited by the range of motion allowed by the capture field of view and the viewing field of view. There is a tradeoff between the range of motion allowed for view stabilization and the size of the output view, because both compete for the same captured pixels. The effectiveness of the "stabilized" view is limited by whether the motion of the image capture device/motion of the cropped views is within the allowed range of motion. For example, if the image capture device is rotated by a large amount during capture of a limited capture field of view, only a small viewing field of view may be used to provide a stabilized view of the video content.
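
By way of non-limiting illustration, the tradeoff described above can be made concrete with a small Python calculation; the variable names and numbers are assumptions for the example.

    # Captured pixels are shared between the viewing field of view and the
    # range of motion available for crop-based stabilization.
    capture_fov_deg = 150.0     # assumed capture field of view
    expected_motion_deg = 20.0  # assumed peak rotation to absorb (each way)

    # Every degree of motion absorbed on each side is a degree the viewing
    # field of view cannot use.
    max_viewing_fov_deg = capture_fov_deg - 2.0 * expected_motion_deg
    print(f"Largest stabilized crop: {max_viewing_fov_deg:.0f} degrees")  # 110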

(83) Rather than cropping the video content to stabilize the video content, the rotation component 108 may stabilize the entire spherical video content, which may enable presentation of stabilized views of the spherical video content in any direction. That is, stabilization of the spherical video content may allow a viewer to direct the viewing window in any direction within the stabilized spherical video content. Capture of 360-degrees of a scene removes the limitation on the range of motion allowed for stabilization. Thus, the spherical video content may be fully stabilized before presentation.

(84) The warp component 110 may be configured to warp the video content (e.g., spherical video content) based on the rotational positions of the image capture device(s) during the capture duration and/or other information. The warping of the video content may include warping of one or more of the video frames (e.g., spherical video frames) of the video content to compensate for rolling shutter of the image sensor(s) during the capture duration and to provide rolling shutter correction. Rolling shutter of the image sensor(s) may include pixel lines of the video frames being acquired progressively (e.g., the upper lines of a video frame are not acquired at the same time as the lower lines). If the image capture device is moving during video content capture, a video frame of the video content may include discontinuities between pixel lines due to rolling shutter.

(85) Warping of video content may include manipulation of one or more portions of video frames of the video content. The video content may be warped such that a given video frame (e.g., given spherical video frame) is warped based on the rotational position(s) of the image capture device(s) corresponding to the moment during the capture duration when the given video frame was captured and/or other information. The rotational position(s) of the image capture device(s) may be used to determine how much/quickly the image capture device(s) moved during video content capture and to determine in what direction and/or by what amount different portions (e.g., pixel lines) of the video frames should be warped.

(86) In some implementations, the video content may be warped further based on acquisition information for the video content and/or other information. The acquisition information may characterize one or more exposure times of the image capture device(s) used to capture the video content. One or more video frames of the video content may be warped according to the rotation(s) of the image capture device(s) during the frame acquisition (exposure) to perform rolling shutter correction. For example, the center of a video frame may be considered as the zero reference, and other parts of the video frame may be smoothly warped considering the rotation of the image capture device(s) and the exposure time during the video frame acquisition.
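
By way of non-limiting illustration, the per-line warp with the frame center as the zero reference could be sketched as follows in Python, assuming SciPy is available. The function name, the linear row-time model, and the use of start-of-exposure and end-of-exposure rotations are assumptions for the example.

    import numpy as np
    from scipy.spatial.transform import Rotation, Slerp

    def per_line_rotations(rot_start, rot_end, num_rows):
        # rot_start/rot_end: device Rotations at the acquisition of the first
        # and last pixel rows of the frame (i.e., across the exposure).
        rots = Rotation.from_quat(
            np.stack([rot_start.as_quat(), rot_end.as_quat()]))
        slerp = Slerp(np.array([0.0, 1.0]), rots)
        # Rolling shutter: each row is acquired at a slightly different time.
        row_fraction = np.linspace(0.0, 1.0, num_rows)
        row_rots = slerp(row_fraction)
        # The center row serves as the zero reference; each row is warped by
        # its rotation relative to the center row.
        center_inv = row_rots[num_rows // 2].inv()
        return [row_rots[i] * center_inv for i in range(num_rows)]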

(87) The presentation component 112 may be configured to effectuate presentation of the stabilized video content (e.g., stabilized spherical video content) on one or more displays. The presentation component 112 may effectuate presentation of the stabilized video content on the display(s) based on one or more viewing directions and/or other information. A viewing direction may define a direction of view for the stabilized video content (e.g., from the point of view of the stabilized spherical video content) as the function of progress through the progress length of the stabilized video content. Because the entire spherical video content is stabilized, the viewing window for the stabilized spherical video content may be directed in any direction within the sphere. For example, based on the viewing direction(s) (such as shown in FIG. 4), a given visual portion/extent of the stabilized video content may be presented on the display(s). Such presentation of the stabilized video content may provide for a punch-out view of the stabilized video content.
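
By way of non-limiting illustration, a punch-out view from a stabilized equirectangular frame could be sketched as follows in Python. The function name, the pinhole-camera model, the axis conventions, and the nearest-neighbor sampling are assumptions for the example, not the disclosed implementation.

    import numpy as np

    def punch_out(stabilized_frame, yaw_deg, pitch_deg, fov_deg=90.0, out_size=256):
        # Sample a perspective viewing window from a stabilized
        # equirectangular frame along a chosen viewing direction.
        h, w = stabilized_frame.shape[:2]
        # Pinhole-camera grid of view-space rays (x forward, y right, z up).
        half = np.tan(np.radians(fov_deg) / 2.0)
        u, v = np.meshgrid(np.linspace(-half, half, out_size),
                           np.linspace(-half, half, out_size))
        d = np.stack([np.ones_like(u), u, -v], axis=-1)
        d /= np.linalg.norm(d, axis=-1, keepdims=True)
        # Rotate the rays by the viewing direction (pitch about y, yaw about z).
        yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
        ry = np.array([[np.cos(pitch), 0, np.sin(pitch)],
                       [0, 1, 0],
                       [-np.sin(pitch), 0, np.cos(pitch)]])
        rz = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                       [np.sin(yaw), np.cos(yaw), 0],
                       [0, 0, 1]])
        d = d @ (rz @ ry).T
        # Convert each ray to equirectangular pixel coordinates and sample.
        lon = np.arctan2(d[..., 1], d[..., 0])
        lat = np.arcsin(np.clip(d[..., 2], -1.0, 1.0))
        x = ((lon + np.pi) / (2.0 * np.pi) * w).astype(int) % w
        y = ((np.pi / 2.0 - lat) / np.pi * h).astype(int).clip(0, h - 1)
        return stabilized_frame[y, x]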

(88) The presentation component 112 may be configured to effectuate simultaneous presentation of the stabilized video content (e.g., stabilized spherical video content) to multiple users. Simultaneous presentation of the stabilized video content may include presentation of the same video content to multiple users at the same time. The presentation component 112 may obtain multiple viewing directions for presentation of the stabilized video content to multiple users. For example, the presentation component 112 may obtain a viewing direction associated with one user and another viewing direction associated with another user, where the viewing directions for the different users are different. Stabilizing the entire spherical video content for viewing may enable different users to simultaneously view different portions of the spherical video content without separate stabilization of views for different users. That is, rather than separately stabilizing views for different users using cropping, the entire spherical video content may be stabilized to allow for different portions of the stabilized video content to be provided to different users.

(89) FIG. 8 illustrates example viewing windows 810, 820 for different users. For example, the viewing window A 810 may be associated with one user (e.g., selected by/for the user) and the viewing window B 820 may be associated with another user (e.g., selected by/for the other user). The viewing window A 810 may have a viewing direction different from the viewing window B 820. That is, the viewing window A 810 may be directed towards a portion in the front of the stabilized video content 800 while the viewing window B 820 may be directed towards a portion in the back of the stabilized video content 800. The simultaneous presentation of the stabilized video content 800 may include presentation of the stabilized video content 800 within the viewing window A 810 to one of the users based on the viewing direction of the viewing window A 810 and/or other information, and presentation of the stabilized video content 800 within the viewing window B 820 to the other user based on the viewing direction of the viewing window B 820 and/or other information.
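
By way of non-limiting illustration, and reusing the hypothetical punch_out() sketch above, both viewing windows can be served from the same stabilized frame; the placeholder frame and the viewing directions are assumptions for the example.

    import numpy as np

    # Placeholder stabilized equirectangular frame (assumed for the example).
    stabilized_frame = np.zeros((512, 1024, 3), dtype=np.uint8)

    # Only the viewing direction differs between the two users; no per-user
    # stabilization is performed.
    view_a = punch_out(stabilized_frame, yaw_deg=0.0, pitch_deg=0.0)    # front
    view_b = punch_out(stabilized_frame, yaw_deg=180.0, pitch_deg=0.0)  # back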

(90) In some implementations, the presentation of the stabilized video content 800 to multiple users may be synchronized such that particular moments in the progress length of the stabilized video content 800 are presented to the multiple users at the same time. That is, while the users are being presented with different spatial portions of the stabilized video content 800 based on different viewing directions/viewing windows, the spatial portions that are presented may correspond to the same moment within the progress length of the stabilized video content 800. The presentation of the stabilized video content 800 corresponding to a particular moment to the multiple users may include presentations of different portions of the visual content of the stabilized video content 800 (e.g., different visual content viewable from the point of view of the stabilized spherical video content 800) based on a difference between the viewing directions.

(91) In some implementations, the image capture device(s) that capture video content may be carried by a vehicle during the capture duration. A vehicle may refer to a device or a machine that may move. A vehicle may be used to move one or more objects (e.g., image capture device). For example, a vehicle may be powered by human, animal, fuel, battery, and/or other power sources. A vehicle may include one or more of a land vehicle (e.g., car, truck, bicycle, motorcycle), an air vehicle (e.g., plane, drone, helicopter, glider), a water vehicle (e.g., boat, submarine, surfboard), and/or a vehicle that may move within multiple media (e.g., water-land vehicle).

(92) Simultaneous presentation of the stabilized video content may be effectuated during the capture duration of the video content. FIG. 9 illustrates stabilized video content 900 that is captured by image capture device(s) of a vehicle 930. The stabilized video content 900 may include 360-degree capture of a scene around the vehicle 930. The simultaneous presentation of the stabilized video content 900 may include different users being presented with different spatial portions of the stabilized video content 900. For example, the simultaneous presentation of the stabilized video content 900 may include presentation of the stabilized video content 900 within a viewing window A 910 to one of the users based on the viewing direction of the viewing window A 910 and/or other information, and presentation of the stabilized video content 900 within a viewing window B 920 to another user based on the viewing direction of the viewing window B 920 and/or other information.

(93) In some implementations, one user may have control over the motion of the vehicle 930, and another user may not have control over the motion of the vehicle 930. For example, the user using the viewing window A 910 may be controlling the vehicle (e.g., riding in the vehicle 930, controlling the vehicle 930 from a remote location), and may use the viewing window A 910 to see where the vehicle 930 is going. The user using the viewing window B 920 may be allowed to see different portions of the stabilized video content 900 while not affecting the control of the vehicle 930 by the other user. Such views of the stabilized video content 900 may be referred to as companion views. Companion views of the stabilized video content 900 may be provided at little cost (e.g., computing time/power) since the video content has already been stabilized and separate stabilization for the companion view is not needed.

(94) While the description herein may be directed to video content, one or more other implementations of the system/method described herein may be configured for other types of media content. Other types of media content may include one or more of audio content (e.g., music, podcasts, audiobooks, and/or other audio content), multimedia presentations, images, slideshows, visual content (one or more images and/or videos), and/or other media content.

(95) Implementations of the disclosure may be made in hardware, firmware, software, or any suitable combination thereof. Aspects of the disclosure may be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a tangible computer-readable storage medium may include read-only memory, random access memory, magnetic disk storage media, optical storage media, flash memory devices, and others, and a machine-readable transmission medium may include forms of propagated signals, such as carrier waves, infrared signals, digital signals, and others. Firmware, software, routines, or instructions may be described herein in terms of specific exemplary aspects and implementations of the disclosure, and as performing certain actions.

(96) In some implementations, some or all of the functionalities attributed herein to the system 10 may be provided by external resources not included in the system 10. External resources may include hosts/sources of information, computing, and/or processing and/or other providers of information, computing, and/or processing outside of the system 10.

(97) Although the processor 11 and the electronic storage 13 are shown to be connected to the interface 12 in FIG. 1, any communication medium may be used to facilitate interaction between any components of the system 10. One or more components of the system 10 may communicate with each other through hard-wired communication, wireless communication, or both. For example, one or more components of the system 10 may communicate with each other through a network. For example, the processor 11 may wirelessly communicate with the electronic storage 13. By way of non-limiting example, wireless communication may include one or more of radio communication, Bluetooth communication, Wi-Fi communication, cellular communication, infrared communication, or other wireless communication. Other types of communications are contemplated by the present disclosure.

(98) Although the processor 11 is shown in FIG. 1 as a single entity, this is for illustrative purposes only. In some implementations, the processor 11 may comprise a plurality of processing units. These processing units may be physically located within the same device, or the processor 11 may represent processing functionality of a plurality of devices operating in coordination. The processor 11 may be configured to execute one or more components by software; hardware; firmware; some combination of software, hardware, and/or firmware; and/or other mechanisms for configuring processing capabilities on the processor 11.

(99) It should be appreciated that although computer components are illustrated in FIG. 1 as being co-located within a single processing unit, in implementations in which the processor 11 comprises multiple processing units, one or more of the computer program components may be located remotely from the other computer program components.

(100) While computer program components are described herein as being implemented via the processor 11 through machine-readable instructions 100, this is merely for ease of reference and is not meant to be limiting. In some implementations, one or more functions of computer program components described herein may be implemented via hardware (e.g., dedicated chip, field-programmable gate array) rather than software. One or more functions of computer program components described herein may be software-implemented, hardware-implemented, or software- and hardware-implemented.

(101) The description of the functionality provided by the different computer program components described herein is for illustrative purposes, and is not intended to be limiting, as any of the computer program components may provide more or less functionality than is described. For example, one or more of the computer program components may be eliminated, and some or all of its functionality may be provided by other computer program components. As another example, the processor 11 may be configured to execute one or more additional computer program components that may perform some or all of the functionality attributed to one or more of the computer program components described herein.

(102) The electronic storage media of the electronic storage 13 may be provided integrally (i.e., substantially non-removable) with one or more components of the system 10 and/or as removable storage that is connectable to one or more components of the system 10 via, for example, a port (e.g., a USB port, a Firewire port, etc.) or a drive (e.g., a disk drive, etc.). The electronic storage 13 may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EPROM, EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. The electronic storage 13 may be a separate component within the system 10, or the electronic storage 13 may be provided integrally with one or more other components of the system 10 (e.g., the processor 11). Although the electronic storage 13 is shown in FIG. 1 as a single entity, this is for illustrative purposes only. In some implementations, the electronic storage 13 may comprise a plurality of storage units. These storage units may be physically located within the same device, or the electronic storage 13 may represent storage functionality of a plurality of devices operating in coordination.

(103) FIG. 2 illustrates method 200 for providing rotational motion correction for video content. The operations of method 200 presented below are intended to be illustrative. In some implementations, method 200 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. In some implementations, two or more of the operations may occur substantially simultaneously.

(104) In some implementations, method 200 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, a central processing unit, a graphics processing unit, a microcontroller, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of method 200 in response to instructions stored electronically on one or more electronic storage media. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 200.

(105) Referring to FIG. 2 and method 200, at operation 201, video information defining spherical video content may be obtained. The spherical video content may have a progress length. The spherical video content may include spherical video frames that define visual content viewable from a point of view as a function of progress through the progress length. The spherical video frames may correspond to moments within the progress length. The spherical video frames may include a given spherical video frame corresponding to a given moment within the progress length. The spherical video content may be captured by one or more image capture devices during a capture duration. In some implementations, operation 201 may be performed by a processor component the same as or similar to the video information component 102 (shown in FIG. 1 and described herein).

(106) At operation 202, rotational position information of the image capture device(s) may be obtained. The rotational position information may characterize rotational positions of the image capture device(s) during the capture duration. In some implementations, operation 202 may be performed by a processor component the same as or similar to the rotational position information component 104 (shown in FIG. 1 and described herein).

(107) At operation 203, the rotational positions of the image capture device(s) during the capture duration may be determined based on the rotational position information. The rotational positions of the image capture device(s) may include a given rotational position of the image capture device(s) for the given moment within the progress length. In some implementations, operation 203 may be performed by a processor component the same as or similar to the rotational position component 106 (shown in FIG. 1 and described herein).

(108) At operation 204, the spherical video content may be rotated based on the rotational positions of the image capture device(s) during the capture duration such that the given spherical video frame is rotated based on the given rotational position of the image capture device(s). The rotation of the spherical video content may include rotation of one or more of the spherical video frames to compensate for the rotational positions of the image capture device(s) during the capture duration and to stabilize the spherical video content. In some implementations, operation 204 may be performed by a processor component the same as or similar to the rotation component 108 (shown in FIG. 1 and described herein).

(109) Although the system(s) and/or method(s) of this disclosure have been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred implementations, it is to be understood that such detail is solely for that purpose and that the disclosure is not limited to the disclosed implementations, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present disclosure contemplates that, to the extent possible, one or more features of any implementation can be combined with one or more features of any other implementation.