

Title:
VIRTUAL THEATER
Document Type and Number:
WIPO Patent Application WO/2000/060857
Kind Code:
A1
Abstract:
A method and apparatus for projecting perspectively corrected spherical video images in a theater environment is disclosed. First, video signals and associated audio signals are stored (14). The direction of view of a user is then tracked (60, 62) to create position feedback information, which is then provided to a processor (50). The video signals and audio signals are then provided to the processor (50). The processor then orients the video signal based on the position feedback information and then perspectively corrects the video signal. The audio signal is oriented based on the orientation of the perspectively corrected video signal. The perspectively corrected video signal and the oriented audio signals are then provided to a projection system (64), wherein the perspectively corrected video signals are projected on a screen and said oriented audio signals create an impression of a three-dimensional sound field.

Inventors:
BAUER MARTIN L (US)
COLE BRUCE (US)
EVANS KIMBERLY S (US)
GRANTHAM CRAIG (US)
JACKSON P LABAN (US)
KING CHRISTOPHER M (US)
KITZMILLER SEAN (US)
KUBAN DANIEL P (US)
MARTIN H LEE (US)
TOURVILLE MICHAEL J (US)
ZIMMERMANN STEVEN D (US)
HATMAKER JAMES L (US)
MCGINNIS SEAN W (US)
GOURLEY CHRISTOPHER SHANNON (US)
Application Number:
PCT/US2000/009462
Publication Date:
October 12, 2000
Filing Date:
April 10, 2000
Assignee:
INTERNET PICTURES CORP (US)
BAUER MARTIN L (US)
COLE BRUCE (US)
EVANS KIMBERLY S (US)
GRANTHAM CRAIG (US)
JACKSON P LABAN (US)
KING CHRISTOPHER M (US)
KITZMILLER SEAN (US)
KUBAN DANIEL P (US)
MARTIN H LEE (US)
TOURVILLE MICHAEL J (US)
ZIMMERMANN STEVEN D (US)
HATMAKER JAMES L (US)
MCGINNIS SEAN W (US)
GOURLEY CHRISTOPHER SHANNON (US)
International Classes:
G02B13/06; G06T15/20; H04N5/225; H04N5/262; H04N5/272; H04N7/173; H04N7/18; H04N21/218; H04N21/2543; H04N21/4223; H04N21/472; H04N21/6587; H04N7/16; (IPC1-7): H04N5/74
Domestic Patent References:
WO1994016406A11994-07-21
WO1997001241A11997-01-09
WO1996013962A11996-05-09
Foreign References:
US5130794A1992-07-14
GB2289820A1995-11-29
US5657073A1997-08-12
US5703604A1997-12-30
US4868682A1989-09-19
EP0522204A11993-01-13
Attorney, Agent or Firm:
Glembocki, Christopher R. (Ltd. 11th floor 1001 G Street N.W. Washington, DC, US)
Claims:
We Claim:
1. An image displaying system, comprising: a projector for projecting composite video images on a screen, wherein at least a portion of said composite video images are perspectively corrected spherical video images; and a speaker system for creating an impression of a three-dimensional sound field which corresponds to the orientation of said perspectively corrected spherical video images.
2. An image projection system, comprising: a processor; a memory for storing video images and audio signals from a camera system, wherein said video images correspond to at least two partial spherical video images, and said video images and audio signals are supplied to said processor; means for tracking a direction of view of a user to provide position feedback information to said processor; said processor comprising means for converting said video signals into seamless spherical video images, and means for perspectively correcting at least a portion of said spherical video images based on said feedback information, wherein said feedback information provides an indication of a direction of view of the user; means for projecting said perspectively corrected video signals received from said processor on a screen; and a speaker system for creating an impression of a three-dimensional sound field which corresponds to an orientation of said perspectively corrected spherical video images.
3. The image projection system according to claim 2, wherein said screen is a flat screen.
4. The image projection system according to claim 2, wherein said screen is a compound curve torus screen.
5. The image projection system according to claim 2, wherein said screen is a hemispherical dome.
6. The image projection system according to claim 2, wherein said screen is a cylindrical screen.
7. The image projection system according to claim 2, wherein said screen has multiple screens.
8. The image projection system according to claim 7, wherein said multiple screens are compound curved screens.
9. The image projection system according to claim 7, wherein said multiple screens are cylindrical screens.
10. The image projection system according to claim 2, wherein said screen is a cube cave display.
11. The image projection system according to claim 2, wherein said screen is a spherical dome.
12. The image projection system according to claim 2, wherein said screen is a polygon cave.
13. The image projection system according to claim 2, wherein said perspectively corrected spherical video signals are displayed on a virtual simulator.
14. The image projection system according to claim 2, wherein head tracker goggles are used to track the direction of view of the user.
15. The image projection system according to claim 2, wherein seat motion control is used to track the direction of view of the user.
16. The image projection system according to claim 2, further comprising: input control means for changing the orientation of said perspectively corrected spherical video images projected on said screen.
17. The image projection system according to claim 16, wherein said input control means is a mouse.
18. The image projection system according to claim 16, wherein said input control means is a joystick.
19. The image projection system according to claim 16, wherein said input control means is a remote control.
20. The image projection system according to claim 16, wherein said input control means is a computer control.
21. The image projection system according to claim 2, wherein said projection means is a single projection system.
22. The image projection system according to claim 2, wherein said projection system is a multiple projection system.
23. The image projection system according to claim 2, wherein said projection means has front and rear projectors.
24. The image projection system according to claim 2, wherein sound levels of objects increase when said objects are in the direction of view.
25. The image projection system according to claim 2, wherein said video signals are live video signals.
26. A method for projecting perspectively corrected video images in a theater environment, comprising the steps of: storing video images and associated audio signals, wherein said video images and audio signals are gathered by a camera system wherein said video images correspond to at least two partial spherical video images; tracking a direction of view of a user to create position feedback information; providing said position feedback information to a processor, wherein said position feedback information provides an indication of a portion of said video images that the viewer is interested in viewing; providing said video images and audio signals to said processor; converting said video signals to seamless spherical video images; perspectively correcting a portion of said spherical video images based on said position feedback information; orienting said audio signals based on an orientation of said perspectively corrected video images; and providing said perspectively corrected video images and said oriented audio signals to a projection system, wherein said perspectively corrected video images are projected on a screen and said oriented audio signals are broadcast to create an impression of a three-dimensional sound field.
27. The method for projecting perspectively corrected video images in a theater environment according to claim 26, further comprising the step of: continuously monitoring said position feedback information and reorienting said perspectively corrected video images and said audio signals as said position feedback information changes.
Description:
VIRTUAL THEATER Related References This application claims the benefit of U. S. Provisional Application No. 60/128,613, filed on April 8, 1999, which is hereby incorporated herein by reference in its entirety. In addition, the following disclosures are filed herewith and are expressly incorporated by reference for any essential material: 1. U. S. Patent Application Serial No. (Attorney Docket No. 01096.86946) entitled "Remote Platform for Camera"; 2. U. S. Patent Application Serial No. (Attorney Docket No. 01096.86949) entitled "Method and Apparatus for Providing Virtual Processing Effects for Wide-Angle Video Images"; 3. U. S. Patent Application Serial No. (Attorney Docket No. 01096.84594) entitled "Immersive Video Presentations".

Field of the Invention The present invention relates to a method and apparatus for displaying perspectively corrected spherical video images, and more particularly to displaying the perspectively corrected spherical video images in an orientation specified by a viewer wherein the audio signal broadcast along with the video signal creates an impression of a three-dimensional sound field which is oriented in the same orientation as the video signals.

Background of the Invention The fundamental apparatus, algorithm and method for achieving perspectively corrected views of any selected portion of a hemispherical (or other wide-angle) field of view are described in detail in U. S. Patent No. 5,185,667. This patent and U. S. Patent Nos. 5,359,363, 5,384,588, 5,990,941, and 6,002,430 are incorporated herein by reference for their teachings.

Through the use of this technology, no moving parts are required for achieving pan, tilt and rotation "motions", as well as magnification. Briefly, a wide-angle field-of-view image is captured into an electronic memory buffer. A selected portion of the captured image containing a region of interest is transformed into a perspective corrected image by an image processing center. This provides direct mapping of the wide-angle image region of interest into a corrected image using an orthogonal set of transforming algorithms. The viewing orientation and other viewing parameters are designated by a command signal generated by either a human operator or a form of computerized input.

The transformed image is deposited in a second electronic buffer, where it is then manipulated to produce the output image as requested by the command signal.

Various spherical and panoramic projection systems are known in the art. For example, various spherical and panoramic display systems are disclosed in U. S. Patent No. 4,656,506, U. S. Patent No. 5,130,794 and U. S. Patent No. 5,495,576 to Curtis J. Ritchey. The images displayed in these projection systems are gathered from a plurality of cameras. However, these systems do not disclose perspectively correcting the spherical video images as taught in U. S. Patent No. 5,185,667.

Furthermore, the projection or display systems disclosed in the Ritchey patents do not disclose a sound system in which a three-dimensional sound field is created based upon the images being projected.

Systems for creating three-dimensional sound fields are known in the art. For example, Aureal Semiconductor, Inc. has software available for creating three-dimensional sound fields. Some of the aspects involved with creating three-dimensional sound fields are disclosed in U. S. Patent No. 5,596,644, U. S. Patent No. 5,729,612, U. S. Patent No. 5,802,180, and U. S. Patent No. 6,009,178, all of which are assigned to Aureal Semiconductor, Inc. These patents are specifically incorporated herein by reference. However, these systems have not been used in conjunction with spherical or panoramic projection systems to create a unique environment in which perspectively corrected spherical video images are displayed on a screen and three-dimensional sound fields corresponding to the orientation of the perspectively corrected images are created within the environment.

Summary of the Invention It is an object of the present invention to provide a theater environment in which a viewer views perspectively corrected spherical video images, wherein the audio signals associated with the video signals create a three-dimensional sound field that is oriented based upon the orientation of the perspectively corrected spherical video images.

According to one embodiment of the present invention, a method for projecting perspectively corrected spherical video images in a theater environment is disclosed. First, video signals and associated audio signals are stored. The direction of view of a user is then tracked to create position feedback information, which is then provided to a processor. The video signals and audio signals are then provided to the processor. The processor then orients the video signal based on the position feedback information and then perspectively corrects the video signal. The audio signal is oriented based on the orientation of the perspectively corrected video signal. The perspectively corrected video signal and the oriented audio signals are then provided to a projection system, wherein the perspectively corrected video signals are projected on a screen and said oriented audio signals create an impression of a three-dimensional sound field.

According to a second embodiment of the invention, the video and audio signals are presented to a user through a head-mounted display, a projection system enveloping at least a portion of the user's head, or goggles. Audio portions are also provided in each situation.

Brief Description of the Drawings The above-mentioned features of the invention will become more clearly understood from the following detailed description of the invention read together with the drawings, in which: Figures 1A and 1B show schematic block diagrams of the signal processing portion of the present invention illustrating the major components thereof. Figure 1A shows the perspective correction process implemented in hardware. Figure 1B shows a perspective correction process implemented in software, operating inside a personal computer.

Figure 2 is a schematic diagram of an image projection system according to one embodiment of the invention.

Figure 3 is a flow chart illustrating a method for projecting perspectively corrected spherical video images in a theater environment according to one embodiment of the invention.

Figures 4A-4G illustrate various screens on which the perspectively corrected spherical video images can be projected according to different embodiments of the invention.

Figure 5 relates to different sound sources and distances to the sources based on the level of zoom in accordance with embodiments of the present invention.

Detailed Description The invention relates to a display apparatus for projecting a stream of video images on a screen. As will be explained in detail below, spherical video images are created which allow a user to look in any direction while watching a video presentation. The direction of view of the user is determined and the spherical video images in the general direction of the user's selection are perspectively corrected in such a manner that the user is looking at perspectively corrected video images even as the user's selected direction of view changes. The display apparatus also comprises a sound system which generates audio signals in such a manner as to create an impression of a three-dimensional sound field that is oriented to correspond to at least a portion of the projected video signals.

The principles of the optical transform utilized in the present invention can be understood by reference to the system 10 of Figures 1A and 1B. (This is also set forth in the aforecited U. S. Patent No. 5,185,667 that is incorporated herein by reference.) Referring to Figure 1A, shown schematically at 11 is a wide angle, e. g., a hemispherical, lens that provides an image of the environment with a 180 degree or greater field-of-view. The lens is attached to a camera 12 which converts the optical image into an electrical signal. These signals are then digitized electronically 13 and stored in an image buffer 14 within the present invention. An image processing system consisting of an X-MAP and a Y-MAP processor shown as 16 and 17, respectively, performs the two-dimensional transform mapping. The image transform processors are controlled by the microcomputer and control interface 15. The microcomputer control interface provides initialization and transform parameter calculation for the system. The control interface also determines the desired transformation coefficients based on orientation angle, magnification, rotation, and light sensitivity input from an input means such as a joystick controller 22, computer input means 23 or some other input device. The transformed image is filtered by a 2-dimensional convolution filter 18 and the output of the filtered image is stored in an output image buffer 19. The output image buffer 19 is scanned out by display electronics 20 to a video display device 21 for viewing.
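To make the transform concrete, below is a minimal software sketch of the kind of per-pixel mapping the X-MAP and Y-MAP processors 16 and 17 perform in hardware: each pixel of the perspective-corrected output is traced back to a source pixel in the wide-angle image. The equidistant ("f-theta") lens model, nearest-neighbor sampling, and function names are illustrative assumptions, not details given in the patent.

# A sketch of the X-MAP/Y-MAP style mapping: every output (perspective)
# pixel is traced back to a source pixel in the wide-angle image.
import numpy as np

def dewarp(fisheye, pan, tilt, zoom, out_w=640, out_h=480):
    """Return a perspective-corrected view of a hemispherical image.

    fisheye: HxWx3 array with the image circle centered in the frame.
    pan, tilt: viewing direction in radians; zoom: magnification factor.
    """
    h, w = fisheye.shape[:2]
    cx, cy, radius = w / 2.0, h / 2.0, min(w, h) / 2.0
    focal = zoom * out_w / 2.0  # pinhole focal length of the virtual camera

    # A viewing ray for every pixel of the virtual (perspective) camera.
    xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2.0,
                         np.arange(out_h) - out_h / 2.0)
    rays = np.stack([xs, ys, np.full_like(xs, focal)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate the rays by tilt (about x) and then pan (about y).
    ct, st = np.cos(tilt), np.sin(tilt)
    cp, sp = np.cos(pan), np.sin(pan)
    rot = (np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]) @
           np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]]))
    rays = rays @ rot.T

    # Equidistant projection: radial position in the fisheye image is
    # proportional to the angle between the ray and the optical axis.
    theta = np.arccos(np.clip(rays[..., 2], -1.0, 1.0))
    phi = np.arctan2(rays[..., 1], rays[..., 0])
    r = radius * theta / (np.pi / 2.0)  # 180-degree field of view
    u = np.clip((cx + r * np.cos(phi)).astype(int), 0, w - 1)
    v = np.clip((cy + r * np.sin(phi)).astype(int), 0, h - 1)
    return fisheye[v, u]

Because pan, tilt and zoom are pure arithmetic in this mapping, no moving parts are needed, mirroring the hardware behavior described above.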

A range of lens types can be accommodated to support various fields of view. The lens optics 11 may correspond with the mathematical coefficients used with the X-MAP and Y-MAP processors 16 and 17 to transform the image. The capability to pan and tilt the output image or images remains even though a different maximum field-of-view is provided with a different lens element.

The invention can be realized by proper combination of a number of optical and electronic devices as described in the 5,185,667 patent whose disclosure is expressly incorporated herein.

Figure 1B contains elements similar to those of Figure 1A but is implemented in a personal computer represented by dashed line D. The personal computer includes central processing unit 15' performing the perspective correction algorithms X-MAP 16' and Y-MAP 17' as stored in RAM, ROM, or some other form. The display driver 20 outputs the perspective corrected image to computer display monitor 21'.

The transformation portion of the invention as described has the capability to pan and tilt the output image through the entire field-of-view of the lens element by changing the input means, e. g. a joystick, computer, mouse, head tracker goggles, a chair with motion control, etc., to the controller.

The image can also be rotated through any portion of 360 degrees on its axis changing the perceived vertical of the displayed image. This capability provides an ability to align the vertical image with the gravity vector to maintain a proper perspective in the image display regardless of the pan or tilt angle of the image. The invention also supports modifications in the magnification used to display the output image. This is commensurate with a zoom function that allows a change in the field-of-view of the output image. The magnitude of zoom provided is a function of the resolution of the input camera, the resolution of the output display, the clarity of the output display, and the amount of picture element (pixel) averaging that is used in a given display.
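As a sketch of the gravity-alignment idea above, the roll angle that keeps the displayed vertical aligned with the world up-vector can be computed from the current pan/tilt rotation and applied as a rotation of the output image. The formulation and axis conventions below are assumptions for illustration, not taken from the patent.

import numpy as np

def gravity_roll(view_rot):
    """Roll angle (radians) that keeps the world up-vector vertical in
    the displayed image, given a 3x3 pan/tilt view rotation matrix.
    Camera convention assumed: x right, y down, z forward."""
    up_cam = view_rot.T @ np.array([0.0, -1.0, 0.0])  # world up, camera frame
    # Angle between the projection of world-up onto the image plane and
    # the image's own up direction (-y); rolling by this angle cancels it.
    return np.arctan2(up_cam[0], -up_cam[1])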

According to one embodiment of the present invention, the above-described imaging system can be used to provide spherical video images in a theater environment, as illustrated in Figure 2.

The video images and audio signals are captured by the camera system (not illustrated) and then stored in the input image buffer 14. A computer 50, such as the personal computer D illustrated in Figure 1B, is connected to the input image buffer 14 for receiving a stream of video images and audio signals from the input image buffer 14.

The computer 50 is also connected to at least one of a plurality of input control devices such as a mouse 52, a joystick 54, a remote control 56, a computer control 58, etc. The computer can also be connected to a pair of goggles 60 that track the movement of a user's head and/or a chair 62 with motion control. Example sensing systems include the use of gyroscopes, accelerometers, optical sensors, radio frequency sensors, and the like which are well known in the art. Each of these input control devices provides position feedback information to the computer 50. The position feedback information provides an indication of the portion of the whole image that the user is interested in or desires to view. In addition, the input control devices can also cause the system to zoom in or out of a displayed image.

It is appreciated that a variety of different image formats may be used to store the spherical image. Formats that may be used include equirectangular, Mercator, side-by-side conical, single conical, bi-hemispherical, single hemispherical (as disclosed in U. S. Patent Nos. 5,684,937, 5,903,782, and 5,936,630 to Oxaal), cubic, and other image projection mappings as are known in the art. Additional file formats are disclosed in U. S. Patent Application Serial No. (01096.84594) filed herewith, whose contents are expressly incorporated herein by reference.
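As one concrete example of how a stored format relates to viewing directions, the sketch below samples an equirectangular image, the first format listed above, at a given yaw and pitch. The linear longitude/latitude mapping is the standard definition of that format; the function name is illustrative.

import numpy as np

def equirect_lookup(pano, yaw, pitch):
    """Sample an equirectangular panorama at a view direction.

    pano: HxWx3 array covering 360 x 180 degrees.
    yaw in [-pi, pi), pitch in [-pi/2, pi/2]; returns one RGB sample.
    """
    h, w = pano.shape[:2]
    u = int((yaw / (2 * np.pi) + 0.5) * w) % w        # longitude -> column
    v = int(np.clip((0.5 - pitch / np.pi) * h, 0, h - 1))  # latitude -> row
    return pano[v, u]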

The computer 50 is also connected to a projection system 64. The projection system 64 can comprise a single or multiple projection systems, front and/or rear projection, flat panel, monitor, display goggles, head mounted display, head-enveloping viewing systems (e. g. flogiston), etc. The projection system 64 also comprises a sound system for creating an impression of a three-dimensional sound field which coincides with the projection of the video images. In the alternative, the computer 50 may be connected to a separate sound system which creates the three-dimensional sound field.

A variety of display systems can be used as illustrated in Figures 4A-4G. For example, the display system can be selected from a computer CRT, television, projection display, high definition television, head mounted display, compound curve torus screen, hemispherical dome, spherical dome, cylindrical screen projection, multi-screen compound curve projection system, cube cave display, polygon cave display, or a virtual reality display system. The equations for transforming images between representations are known in the art and are not treated here.

The operation of an imaging system according to one embodiment of the invention will now be described with reference to Figure 3. First, the video images and audio signals are captured in step 701 by a camera system like the one described above. The video images and audio signals may depict live and continuous action being captured by the camera system. The video images and audio signals are then sent to the input image buffer 14 in step 703 by a communications link 51. The communications link 51 can be, for example, a cable, phone line, wireless communication link or any other device capable of transmitting signals from the camera system to the input image buffer 14.

The user may indicate a desired direction of view using a variety of input controls to create position feedback information relating to the direction of the desired view in step 705. For example, the user may be wearing goggles which monitor changes in the position of the user's head as a means for creating the position feedback information. This system may use accelerometers to detect head movement. Likewise, the user may be sitting in a chair which is capable of changing positions and thus changing the direction of view of the user. This system may use optical sensors in the chair to detect movement. The user can also be using a mouse, joystick, remote control, computer control, etc., as a means for indicating the user's desired direction of view. Alternatively, feedback information may originate from another user, with the current user acting as a spectator. The ability to control another's direction of view enables business applications such as providing tours of environments. Further, the feedback information may be stored and played back in place of a current user's feedback information. Previously stored position information is described in U. S. Patent No. 5,764,276, which is expressly incorporated herein by reference.
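A minimal sketch of the stored-feedback playback mentioned above follows; the timestamped (pan, tilt, zoom) sample format and names are hypothetical, chosen only to illustrate recording a session and replaying it at its original pacing in place of a live tracker.

import json, time

class FeedbackRecorder:
    """Record (timestamp, pan, tilt, zoom) samples so a viewing session
    can later be replayed in place of a live head tracker."""
    def __init__(self):
        self.samples = []

    def record(self, pan, tilt, zoom):
        self.samples.append({"t": time.time(), "pan": pan,
                             "tilt": tilt, "zoom": zoom})

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.samples, f)

def replay(path):
    """Yield stored (pan, tilt, zoom) samples at their original pacing."""
    with open(path) as f:
        samples = json.load(f)
    for prev, cur in zip(samples, samples[1:] + [None]):
        yield prev["pan"], prev["tilt"], prev["zoom"]
        if cur is not None:
            time.sleep(max(0.0, cur["t"] - prev["t"]))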

The video images and audio signals may be sequentially sent to the computer 50 in a stream in step 707. In addition, the position feedback information is also sent to the computer 50 as it becomes available. The computer first converts the video signals into seamless spherical video images. The process for seaming images is disclosed in detail in U. S. Patent Application Serial No. (Attorney Docket No. 01096.86949) entitled "Method and Apparatus for Providing Virtual Processing Effects for Wide-Angle Video Images" and U. S. Patent Application Serial No. (Attorney Docket No. 01096.84594) entitled "Immersive Video Presentations", both of which are expressly incorporated herein by reference. At least a portion of the spherical video images are perspectively corrected based upon the position feedback information in, for example, the manner described above with respect to Figures 1A-1B. For example, only the spherical video images in the general vicinity of the user's view need to be perspectively corrected. The position feedback information is used by the computer 50 to determine the user's direction of view or desired direction of view. Once the direction of view has been determined, the computer 50 perspectively corrects the images in the vicinity of the user's direction of view. Alternatively, the user may specify multiple viewing directions. The process for displaying spherical video is discussed in U. S. Patent Application Serial No. (Attorney Docket No. 01096.86949) entitled "Method and Apparatus for Providing Virtual Processing Effects for Wide-Angle Video Images", which is incorporated herein by reference.

Where a user specifies multiple directions of view, the resulting images may be displayed as two separate windows. Alternatively, they may be displayed as a split screen, with one screen reflecting a first direction of view and subsequent screens reflecting the other directions of view (e. g., DOV2, DOV3, DOV4, ...).

Once the perspectively corrected video signals have been created, the computer 50 then properly orients the audio signals to create the impression of a three-dimensional sound field which corresponds with the selected layout of the 360 degree video image in step 713. For example, if a waterfall is directly in the field of view of the user, then the sound signals must be oriented in such a manner that the sound of the waterfall seems to be coming from in front of the viewer, i. e., from where the waterfall is located relative to the direction of the user's view. In addition, if the user is zooming in on the waterfall, the computer needs to increase the sound of the waterfall. Likewise, the sound of the waterfall needs to decrease as the waterfall moves further away. Furthermore, as objects pass by on either the right or left of the direction of view, any sounds the objects are making need to seem as if they are moving by on either the right or left of the user's direction of view. In addition, the sound levels of objects at different distances from where the user is "located" need to appear realistic. The processing of the sound in the computer 50, according to one embodiment of the invention, is performed by software developed by Aureal Semiconductor Inc. which incorporates at least some of the information disclosed in U. S. Patent Nos. 5,596,644, 5,729,612, 5,802,180, and 6,009,178, all of which have been incorporated herein by reference. It will be understood that other software can be used to create the impression of a three-dimensional sound field and the invention is not limited to the software developed by Aureal Semiconductor Inc.
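The waterfall behavior just described amounts to a per-source gain that grows as a source approaches the direction of view and as the virtual distance shrinks. The sketch below is an assumed simple model (a cardioid-style directional weight combined with inverse-square distance falloff); it is not the Aureal implementation.

import numpy as np

def source_gain(dov, src_pos, listener_pos):
    """Gain for one sound source given the direction of view (unit vector).

    Sources in front of the viewer are emphasized, and level falls off
    with the square of distance (assumed model, not from the patent)."""
    offset = np.asarray(src_pos, float) - np.asarray(listener_pos, float)
    dist = np.linalg.norm(offset)
    direction = offset / dist
    facing = np.dot(dov, direction)      # 1 = straight ahead, -1 = behind
    directional = 0.5 * (1.0 + facing)   # simple cardioid-like weighting
    return directional / dist**2

# Waterfall example from the text: zooming in halves the virtual distance,
# so the waterfall's gain roughly quadruples.
dov = np.array([0.0, 0.0, 1.0])
print(source_gain(dov, [0, 0, 10], [0, 0, 0]))  # far away
print(source_gain(dov, [0, 0, 5], [0, 0, 0]))   # zoomed in: louder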

For example, spatialization of sound fields can be accomplished by filtering audio signals using filters having unvarying frequency response characteristics and amplifying signals using amplifier gains adapted in response to signals representing sound source location and/or listener position. The filters are derived using a singular value decomposition process which finds the best set of component impulse responses to approximate a given target set of impulse responses corresponding to head related transfer functions. One method for providing an acoustic display disclosed in U. S. Patent No. 5,802,180 comprises the following steps. First, audio signals and direction signals representing one or more sources of aural information are received. One or more ambient signals representing ambient effects are also received. First signals are then generated in response to the audio signals. A plurality of filtered signals are generated by filtering the first signals with filters having respective unvarying impulse responses which are substantially mutually orthogonal.

One or more output signals are generated in response to the filtered signals. For a display receiving a plurality of audio signals, a respective first signal is generated by combining the audio signals according to a respective set of weights adapted in response to the direction signals and the ambient signals. For a display generating a plurality of output signals, a respective output signal is generated by combining the filtered signals according to a respective set of weights adapted in response to the direction signals and the ambient signals. A method for providing an acoustic display disclosed in U. S. Patent No. 5,596,644 comprises the following steps. An audio signal representing an acoustic source is generated. Location signals representing the apparent location of the source are then generated. Two or more filters are then applied to the audio signals. A plurality of output signals are generated by amplifying the output of each filter using amplifier gains adapted in response to the location signal and combining the amplified signals.
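A minimal sketch of the fixed-filter structure described above: the audio is convolved with a small set of unvarying component impulse responses, and only the per-component gains change with source location. The component filters here stand in for the SVD-derived basis; all names are illustrative.

import numpy as np

def spatialize(audio, components, weights):
    """Approximate a location-dependent response as a weighted sum of
    fixed component impulse responses; `weights` are the gains adapted
    to the source location (all components assumed equal length)."""
    out = np.zeros(len(audio) + len(components[0]) - 1)
    for h, w in zip(components, weights):
        out += w * np.convolve(audio, h)  # filters stay fixed, gains vary
    return out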

Once the audio signals have been oriented to correspond to the orientation of the perspectively corrected video images, the computer outputs the perspectively corrected video images and audio signals to the projection system. The projection system then displays the perspectively corrected video images on a display screen and the audio signals are broadcast so as to create the impression of a three-dimensional sound field in step 715.

An alternative representation of the sound information is shown in Figure 5. Spherical image 901 is represented as it would appear to a user at location 902. As a direction of view is where a user is presently looking, sound sources A and B need to be coordinated to reflect the current field of view. For simplicity, the radius of sphere 901 is set to r. As one zooms in toward a noise source, the noise source should appear louder and other sources softer.

Figure 5A shows a direction of view at 903. This direction of view (DOV) has a low zoom (or magnification) factor. Here, the noise sources A and B have approximately the same normalized volume, as the audible distances from 903 to A and from 903 to B are about the same. As a user zooms in on the perspectively corrected image, the DOV remains the same (906), but the zoom factor increases. The distance to points A and B decreases and therefore the sound from A and B needs to increase. However, as the DOV 906 is closer to A than to B, a weighting factor applied to A reflects the virtual location of the viewer getting closer to A than to B. As perceived sound volume is inversely proportional to the square of the distance from the noise source, the volume from point A is:

V(A) = k / d(A)^2

where d(A) is the distance from the virtual viewer location to source A and k is a constant. Using a radius of 1 for sphere 901, the radius of a sphere at a first zoom (903) may be 0.2, for example. The radius at the second zoom (904) may be 0.4. The radius at the third zoom (905) may be 0.6. As the viewer zooms in, the sound from point A increases over the sound from B. The change in volume of A may be represented by the change in weighting factor for A:

V_A(904) / V_A(903) = d(A, 903)^2 / d(A, 904)^2

where d(A, 903) is the distance from source A to location 903. Thus, as a user zooms in, the sound level emanating from a source close to the DOV increases and the sound emanating from a source opposite the DOV decreases. For two sources close to one another, the sound increases for both of them. This may be modified to weight more highly the sound sources directly within the DOV. Figures 5B and 5C show similar triangles that may be used to determine the ratio of sound from A to B.

The system and method and viewing environment described herein may also be applied to non-spherical video images. For example, systems exist (e. g., QTVR by Apple) that primarily concentrate on cylindrical images, where the top and bottom portions of an environment are removed or not captured. The designation of sound locations within such an image is similar to that described above. For example, a sound source may be located within the field of view of a user. Also, the sound source may be located in a position not viewable by a user; for example, the sound source may be located directly overhead. In this example, where one is limited to only a panoramic view eliminating the top and bottom of the image (e. g., the top 10 degrees and bottom 10 degrees), zooming around the environment would still produce an effect similar to that shown in Figures 5A-5C. Accordingly, it is appreciated that the above-described aural technique may be applied to images containing a spherical data set as well as images containing less than a spherical data set.
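To make the zoom arithmetic concrete, the sketch below evaluates the reconstructed inverse-square relation at the three zoom radii given above (0.2, 0.4 and 0.6 on a unit sphere, with source A on the sphere along the direction of view).

def zoom_gain(d_ref, d_new):
    """Gain of a source relative to a reference position, using the
    inverse-square relation V_A(new) / V_A(ref) = d_ref**2 / d_new**2."""
    return (d_ref / d_new) ** 2

# Unit-radius sphere 901 with source A on the sphere along the DOV.
# Viewer positions 903, 904, 905 sit at radii 0.2, 0.4, 0.6 from center,
# so the distance to A is 0.8, 0.6, 0.4 respectively.
for radius in (0.2, 0.4, 0.6):
    print(radius, zoom_gain(0.8, 1.0 - radius))
# -> 1.0, ~1.78, 4.0: zooming in toward A raises its level accordingly.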

The following disclosures are filed herewith and are expressly incorporated by reference for any essential material: 1. U. S. Patent Application Serial No. (Attorney Docket No. 01096.86946) entitled "Remote Platform for Camera"; 2. U. S. Patent Application Serial No. (Attorney Docket No. 01096.86949) entitled "Method and Apparatus for Providing Virtual Processing Effects For Wide-Angle Video Images"; and 3. U. S. Patent Application Serial No. (Attorney Docket No. 01096.84594) entitled "Immersive Video Presentations".

While a preferred embodiment has been shown and described, it will be understood that it is not intended to limit the disclosure, but rather it is intended to cover all modifications and alternate methods falling within the spirit and the scope of the invention as defined in the appended claims.

All of the above referenced U. S. patents and pending applications referenced herein are expressly incorporated by reference.