


Title:
FLOATING POINT TO INTEGER CONVERSION FOR 360-DEGREE VIDEO PROJECTION FORMAT CONVERSION AND SPHERICAL METRICS CALCULATION
Document Type and Number:
WIPO Patent Application WO/2018/170416
Kind Code:
A1
Abstract:
A system, method, and/or instrumentality may convert content of a first projection format to content of a second projection format. A sample position associated with the content of the first projection format may be identified and/or represented as a floating point value. A scaling factor for converting the floating point value to a fixed point value may be identified. The scaling factor may be less than a scaling limit divided by a floating point computation precision limit. The fixed point value may be converted to an integer value. The integer value may be the top-left integer sampling position of the fixed point value. An interpolation filter coefficient may be determined based on a distance between the fixed point value and the integer value. The content of the first projection format may be converted to the content of the second projection format based on the interpolation filter coefficient.

Inventors:
HE, Yuwen (13542 Silver Vine Path, San Diego, CA, 92130, US)
HANHART, Philippe (7916 Avenida Navidad, Apartment 150, San Diego, CA, 92122, US)
YE, Yan (5001 Pearlman Way, San Diego, CA, 92130, US)
Application Number:
US2018/022892
Publication Date:
September 20, 2018
Filing Date:
March 16, 2018
Assignee:
VID SCALE, INC. (200 Bellevue Parkway, Suite 300, Wilmington, DE, 19809, US)
International Classes:
G06T3/00
Foreign References:
US5054097A1991-10-01
Other References:
SUNYOUNG LEE ET AL: "Fast Affine Transform for Real-Time Machine Vision Applications", 1 January 2006, INTELLIGENT COMPUTING, LECTURE NOTES IN COMPUTER SCIENCE (LNCS), SPRINGER, BERLIN, DE, PAGE(S) 1180 - 1190, ISBN: 978-3-540-37271-4, XP019038487
HE Y ET AL: "AHG8: Platform independent floating point to integer conversion for 360Lib", 6. JVET MEETING; 31-3-2017 - 7-4-2017; HOBART; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16); URL: HTTP://PHENIX.INT-EVRY.FR/JVET/, no. JVET-F0041, 24 March 2017 (2017-03-24), XP030150697
LEONG M P ET AL: "Automatic floating to fixed point translation and its application to post-rendering 3D warping", FIELD-PROGRAMMABLE CUSTOM COMPUTING MACHINES, 1999. FCCM '99. PROCEEDINGS. SEVENTH ANNUAL IEEE SYMPOSIUM ON NAPA VALLEY, CA, USA 21-23 APRIL 1999, LOS ALAMITOS, CA, USA, IEEE COMPUT. SOC, US, 21 April 1999 (1999-04-21), pages 240 - 248, XP010359148, ISBN: 978-0-7695-0375-2, DOI: 10.1109/FPGA.1999.803686
SREEDHAR KASHYAP KAMMACHI ET AL: "Viewport-Adaptive Encoding and Streaming of 360-Degree Video for Virtual Reality Applications", 2016 IEEE INTERNATIONAL SYMPOSIUM ON MULTIMEDIA (ISM), IEEE, 11 December 2016 (2016-12-11), pages 583 - 586, XP033048306, DOI: 10.1109/ISM.2016.0126
Attorney, Agent or Firm:
HOWE, Richard, A. et al. (Condo Roccia Koptiw LLP, 1800 JFK Blvd. Suite 170, Philadelphia PA, 19103, US)
Claims:
What is Claimed:

1. A device that converts content of a first projection format to content of a second projection format via a projection format conversion, the device comprising:

a processor configured to:

identify a first sample position associated with the content of the first projection format, wherein the first sample position associated with the content of the first projection format is used in the projection format conversion and the first sample position is represented as a floating point value;

identify a scaling factor for converting the first sample position represented as a floating point value to a first sample position represented as a fixed point value, wherein the scaling factor is less than a scaling limit divided by a floating point computation precision limit;

convert the first sample position represented as a floating point value to the first sample position represented as a fixed point value based on the scaling factor and the first sample position represented as a floating point value;

convert the first sample position represented as a fixed point value to a second sample position represented as an integer value, wherein an interpolation filter coefficient is determined based on a distance between the first sample position represented as a fixed point value and the second sample position represented as an integer value; and

convert the content of the first projection format to the content of the second projection format based on the interpolation filter coefficient.

2. The device of claim 1, wherein the processor is further configured to convert the first sample position represented as a fixed point value to the second sample position represented as an integer value by:

determining a horizontal component of the second sample position represented as an integer value by performing a first flooring of a horizontal component of the first sample position represented as a fixed point value; and

determining a vertical component of the second sample position represented as an integer value by performing a second flooring of a vertical component of the first sample position represented as a fixed point value.

3. The device of claim 1, wherein the second sample position represented as an integer value is a top-left integer sampling position of the first sample position.

4. The device of claim 1, wherein the floating point computation precision limit is 10^-12 for a sample position represented as a floating point value having a double precision or 10^-6 for a sample position represented as a floating point value having a single precision.

5. The device of claim 1, wherein the first sample position represented as a floating point value has a single precision or a double precision.

6. The device of claim 1, wherein the processor is further configured to determine the first sample position represented as a fixed point value by performing a first multiplication of the first sample position represented as a floating point value with the scaling factor, performing a first rounding of the first multiplication, and dividing the first rounding of the first multiplication by the scaling factor.

7. The device of claim 1, wherein the first projection format is an equirectangular format and the second projection format is a cubemap format.

8. The device of claim 1, wherein the scaling factor is a power of 2.

9. The device of claim 8, wherein the processor is further configured to convert the first sample position represented as a fixed point value to the second sample position represented as an integer value by performing a rounding of a product of the first sample position represented as a fixed point value and the scaling factor, and right-shifting the product of the rounding of the first sample position represented as a fixed point and the scaling factor.

10. The device of claim 8, wherein the processor is further configured to convert the first sample position represented as a fixed point value to the second sample position represented as an integer value by performing a flooring of a product of the first sample position represented as a fixed point value and the scaling factor, and right-shifting the product of the flooring of the first sample position represented as a fixed point value and the scaling factor.

11. A method for converting content of a first projection format to content of a second projection format via a projection format conversion, the method comprising:

identifying a first sample position associated with the content of the first projection format, wherein the first sample position associated with the content of the first projection format is used in the projection format conversion and the first sample position is represented as a floating point value;

identifying a scaling factor for converting the first sample position represented as a floating point value to a first sample position represented as a fixed point value, wherein the scaling factor is less than a scaling limit divided by a floating point computation precision limit;

converting the first sample position represented as a floating point value to the first sample position represented as a fixed point value based on the scaling factor and the first sample position represented as a floating point value;

converting the first sample position represented as a fixed point value to a second sample position represented as an integer value, wherein an interpolation filter coefficient is determined based on a distance between the first sample position represented as a fixed point value and the second sample position represented as an integer value; and

converting the content of the first projection format to the content of the second projection format based on the interpolation filter coefficient.

12. The method of claim 11, wherein the scaling factor is a power of 2.

13. The method of claim 12, wherein the first sample position represented as a fixed point value is converted to the second sample position represented as an integer value by performing a rounding of a product of the first sample position represented as a fixed point value and the scaling factor, and right-shifting the product of the rounding of the first sample position represented as a fixed point and the scaling factor.

14. The method of claim 12, wherein the first sample position represented as a fixed point value is converted to the second sample position represented as an integer value by performing a flooring of a product of the first sample position represented as a fixed point value and the scaling factor, and right-shifting the product of the flooring of the first sample position represented as a fixed point value and the scaling factor.

15. A device that converts a floating point value to an integer value, the device comprising:

a processor configured to:

identify the floating point value to be converted to the integer value;

identify a scaling factor for converting the floating point value to a fixed point value, wherein the scaling factor is less than a scaling limit divided by a floating point computation precision limit;

convert the floating point value to the fixed point value based on the floating point value and the scaling factor; and

convert the fixed point value to the integer value.

16. The device of claim 15, wherein the scaling factor is a number that is positive and even.

17. The device of claim 15, wherein the processor is further configured to convert the floating point value to the fixed point value by performing a first multiplication of the floating point value with the scaling factor, performing a first rounding of the first multiplication, and dividing the first rounding of the first multiplication by the scaling factor.

18. The device of claim 15, wherein the scaling factor is a power of 2.

19. The device of claim 18, wherein the processor is further configured to determine the integer value by performing a rounding of a product of the fixed point value and the scaling factor, and right-shifting the product of the rounding of the fixed point value and the scaling factor.

20. The device of claim 18, wherein the processor is further configured to determine the integer value by performing a flooring of a product of the fixed point value and the scaling factor, and right-shifting the product of the flooring of the fixed point value and the scaling factor.

Description:
FLOATING POINT TO INTEGER CONVERSION

FOR 360-DEGREE VIDEO PROJECTION FORMAT

CONVERSION AND SPHERICAL METRICS CALCULATION

CROSS-REFERENCE

[0001] This application claims the benefit of U.S. Provisional Application No. 62/472,212, filed on March 16, 2017, which is incorporated herein by reference as if fully set forth.

BACKGROUND

[0002] Virtual reality (VR) is increasingly entering our daily lives. VR has many application areas, including healthcare, education, social networking, industry design/training, games, movies, shopping, entertainment, etc. VR is gaining attention from industries and consumers because VR is capable of bringing an immersive viewing experience. VR creates a virtual environment surrounding the viewer and generates a true sense of "being there" for the viewer. Providing a full, realistic feeling in the VR environment is important for a user's experience. For example, the VR system may need to support interactions through posture, gesture, eye gaze, voice, etc. To allow the user to interact with objects in the VR world in a natural way, the VR system may provide haptic feedback to the user.

SUMMARY

[0003] A system, method, and/or instrumentality may be provided for a floating point to fixed point conversion and/or a floating point to integer conversion. Content of a first projection format may be converted to content of a second projection format via a projection format conversion. A sample position associated with the content of the first projection format may be used in the projection format conversion. The sample position may be represented as a floating point value, a fixed point value, or an integer value. A scaling factor for converting the sample position represented as a floating point value to a sample position represented as a fixed point value may be identified. The scaling factor may be less than a scaling limit divided by a floating point computation precision limit. The sample position represented as a floating point value may be converted to the sample position represented as a fixed point value, for example, based on the scaling factor.

[0004] A sample position represented as a fixed point value may be converted to a sample position represented as an integer value. The sample position represented as an integer value may be the top-left integer sampling position of the sample position represented as a fixed point value. An interpolation filter coefficient may be determined based on the sample position represented as the fixed point value. For example, an interpolation filter coefficient may be determined based on a distance between the sample position represented as a fixed point number and the sample position represented as an integer value. The sample position represented as an integer value may be associated with the content of the first projection format. The content of the first projection format may be converted to the content of the second projection format, for example, based on the interpolation filter coefficient.

[0005] The scaling factor may be a power of 2. The sample position represented as a fixed point value may be converted to the sample position represented as an integer value via a shifting (e.g., a right shifting). For example, the sample position represented as a fixed point value may be converted to a sample position represented as an integer value by performing a rounding of a product of the sample position represented as a fixed point value and the scaling factor, and right-shifting the product of the rounding of the sample position represented as a fixed point and the scaling factor. The sample position represented as a fixed point value may be converted to a sample position represented as an integer value by performing a flooring of a product of the sample position represented as a fixed point value and the scaling factor, and right-shifting the product of the flooring of the sample position represented as a fixed point value and the scaling factor.

[0006] Content of a first projection format may be converted to content of a second projection format via a projection format conversion. A first sample position associated with the content of the first projection format may be identified. The first sample position associated with the content of the first projection format may be used in the projection format conversion and/or the first sample position may be represented as a floating point value. The first sample position represented as a floating point value may be converted to a first sample position represented as a fixed point value. The first sample position represented as a fixed point value may be converted to a second sample position represented as an integer value. For example, the first sample position represented as a fixed point value may be converted to a second sample position represented as an integer value based on a rounding of the first sample position represented as a fixed point value. The second sample position represented as an integer value may be the first sample position's nearest integer sample position. The content of the first projection format may be converted to the content of the second projection format based on the interpolation filter coefficient. The first sample position represented as a floating point value may be converted to the first sample position represented as a fixed point value based on a rounding of the first sample position represented as a floating point value.

BRIEF DESCRIPTION OF THE DRAWINGS

[0007] A more detailed understanding may be had from the following description, given by way of example in conjunction with the accompanying drawings.

[0008] FIG. 1A shows an example sphere sampling in longitude and latitude.

[0009] FIG. 1B shows an example 2D planar with equirectangular projection.

[0010] FIG. 2A shows an example equirectangular picture.

[0011] FIG. 2B shows an example uneven vertical sampling in 3D space with equal latitude interval.

[0012] FIG. 3 shows an example sphere geometry representation with cubemap projection, PX (0), NX (1), PY (2), NY (3), PZ (4), NZ (5).

[0013] FIG. 4 shows an example 3x2 frame packed picture with cubemap projection.

[0015] FIG. 5 shows an example 360-degree video processing workflow with stitching in the front end.

[0015] FIG. 6 shows an example interpolation filter coefficient derivation used in a projection conversion.

[0016] FIG. 7A shows an example using a look up table (LUT) to approximate a non-linear function.

[0017] FIG. 7B shows an example using LUT + linear interpolation to approximate a non-linear function.

[0018] FIG. 8A is a system diagram of an example communications system.

[0019] FIG. 8B is a system diagram of an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 8A.

[0020] FIG. 8C is a system diagram of an example radio access network and an example core network that may be used within the communications system illustrated in FIG. 8A.

[0021] FIG. 8D is a system diagram of another example radio access network and an example core network that may be used within the communications system illustrated in FIG. 8A.

[0022] FIG. 8E is a system diagram of another example radio access network and an example core network that may be used within the communications system illustrated in FIG. 8A.

DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS

[0023] A detailed description of illustrative embodiments will now be described with reference to the various Figures. Although this description provides a detailed example of possible implementations, it should be noted that the details are intended to be exemplary and in no way limit the scope of the application.

[0024] Virtual reality (VR) systems may use 360-degree video. For example, VR systems may use 360-degree video to provide the users the capability to view a scene from 360-degree angles in the horizontal direction and/or 180-degree angles in the vertical direction. VR and/or 360-degree video may be considered to be the direction for media consumption beyond Ultra High Definition (UHD) service. Work on the requirements and/or potential technologies for an omnidirectional media application format may be performed to improve the quality of 360-degree video in VR and/or may standardize the processing chain for client interoperability. Free view TV (FTV) may test the performance of a 360-degree video (omnidirectional video) based system and/or a multi-view based system.

[0025] The quality and/or experience of one or more aspects in the VR processing chain may be improved. For example, the quality and/or experience of one or more aspects in capturing, processing, displaying, etc., of VR processing may be improved. On the capturing side, VR may use one or more cameras to capture the scene from one or more views (e.g., 2-12 views). The views may be stitched together, for example, to form 360-degree video in high resolution (e.g., 4K or 8K). On the client or user side, the VR system may include a computation platform, a head mounted display (HMD), and/or head tracking sensors. The computation platform may handle receiving and/or decoding the 360-degree video, and/or generating the viewport for display. Two pictures (e.g., one picture for each eye) may be rendered for the viewport. The two pictures may be displayed in the HMD, for example, for stereo viewing. A lens may be used to magnify the image displayed in the HMD (e.g., for better viewing). The head tracking sensor may keep (e.g., constantly keep) track of the viewer's head orientation and/or may provide (e.g., feed) the orientation information to the system, for example, to display the viewport picture for the orientation. VR systems may provide a device (e.g., a specialized touch device) for a viewer to interact with one or more objects in the virtual world. A VR system may be driven by a device (e.g., a powerful workstation) with good graphics processing unit (GPU) support. A VR system may be a light VR system. The light VR system may use a smartphone as the computation platform, HMD display, and/or head tracking sensor. The spatial HMD resolution may be 2160x1200, the refresh rate may be 90Hz, and/or the field of view (FOV) may be 110 degrees. The sampling rate for a head tracking sensor may be 1000Hz and/or may capture fast movement. A VR system may include a lens and/or a cardboard. The VR system may be driven by a smartphone.

[0026] The quality of experience (e.g., interactivity and/or haptic feedback) may be improved in VR systems. For example, HMD may be too big and may not be convenient to wear. Resolution of stereoscopic views (e.g., provided by the HMDs) may be increased. The feeling from vision in VR environment (e.g., with the force feedback in the real world) may be combined. A VR roller coaster may be an example application.

[0027] A 360-degree video compression and/or delivery system may be provided. For example, a channel for DASH based 360-degree video streaming may be provided. 360-degree video delivery may represent 360-degree information, for example, 360-degree information using a sphere geometry structure. For example, synchronized multiple views may be captured by multiple cameras. The synchronized multiple views may be stitched on the sphere, for example, as an integral structure. The sphere information may be projected to a 2D planar surface, for example, with a projection format. An example projection format may be an equirectangular projection (ERP). FIG. 1A shows an example sphere sampling in longitude (φ) and latitude (θ). FIG. 1B shows an example sphere being projected to a 2D plane using ERP. The longitude φ in the range [-π, π] may indicate yaw, and the latitude θ in the range [-π/2, π/2] may indicate pitch, as in aviation. π may indicate the ratio of a circle's circumference to the circle's diameter. (x, y, z) may indicate a point's coordinates in 3D space, and/or (ue, ve) may indicate a point's coordinates in a 2D plane. An equirectangular projection may be represented in Equations (1) and (2):

ue = (φ/(2*π) + 0.5)*W    (1)

ve = (0.5 - θ/π)*H    (2)

where W and H may be the width and height of the 2D rectangular picture. As shown in FIGs. 1A and 1B, the point P, the cross point between longitude L4 and latitude A1 on the sphere, may be mapped to a unique point q in the 2D plane using Equations (1) and (2). The point q in the 2D plane may be projected back to the point P on the sphere, for example, via inverse projection. The field of view (FOV) in FIG. 1B shows an example where the FOV in the sphere is mapped to the 2D plane with the view angle along the X axis being about 110 degrees.
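For illustration, a minimal, non-normative C++ sketch of the mapping in Equations (1) and (2) follows; the function and structure names are assumptions of this example and are not taken from the application.

```cpp
// Sketch of Equations (1) and (2): map longitude phi in [-pi, pi] and latitude
// theta in [-pi/2, pi/2] to ERP picture coordinates (ue, ve) for a W x H picture.
struct ErpPos { double ue, ve; };

ErpPos sphereToErp(double phi, double theta, double W, double H) {
    const double PI = 3.14159265358979323846;
    ErpPos q;
    q.ue = (phi / (2.0 * PI) + 0.5) * W;   // Equation (1)
    q.ve = (0.5 - theta / PI) * H;         // Equation (2)
    return q;
}
```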

[0028] With equirectangular projection, a 2D planar picture may be treated as a 2D video. A 2D planar picture may be encoded with a video codec (e.g., H.264, HEVC). The 2D planar picture may be delivered to a client. At the client side, the frame packed 2D rectangular video may be decoded. The frame packed 2D rectangular video may be rendered based on a user's viewport, for example, by projecting and/or displaying the portion of the FOV in the equirectangular picture onto the HMD. The characteristic of equirectangular projected 2D picture may be different from a conventional 2D picture (e.g., rectilinear video). For example, though spherical video may be transformed to a 2D planar picture for encoding with ERP, the characteristic of equirectangular projected 2D picture may be different from a conventional 2D picture.

[0029] FIG. 2A shows an example equirectangular projected picture. The top portion of the picture (e.g., corresponding to the North Pole) and/or the bottom portion (e.g., corresponding to the South Pole) may be stretched, which may indicate that the equirectangular sampling in the 2D spatial domain is uneven. FIG. 2B shows an example warping effect. The warping effect may occur if a sampling with an equal latitude interval in a sphere is applied. S0, S1, S2 and S3 may indicate that the sampling interval in latitude may be equal. The spatial distances d0, d1, d2, and d3 may indicate the distances when sampling intervals S0, S1, S2, and S3 are projected onto the 2D plane; the spatial distances d0, d1, d2, and d3 are different. Objects near the pole areas may be squashed in the vertical direction. For example, if an object moves (e.g., translationally moves) from the equator to the pole on the sphere, the shape of the object projected on the 2D plane may be changed. The shape of the object may be changed as the object moves across the corresponding positions on the 2D plane, for example, after equirectangular projection. The motion field corresponding to the object in the 2D plane along the temporal direction may be determined. A translational model may be used to describe a motion field. Areas closer to the poles may be less interesting for viewers and/or content providers, for example, compared to the areas closer to the equator. For example, the viewer may not focus on the top and bottom regions for a long duration. Based on the warping effect, the areas may be stretched to become a large portion of the 2D plane after equirectangular projection. Equirectangular picture coding may include applying pre-processing (e.g., smoothing) to the pole areas, for example, to reduce the bandwidth required to code the pole areas. Geometric structures (e.g., different geometric structures) representing 360-degree video may be provided. Geometric structures may include a cubemap, cylinder, pyramid, etc. Among projections, a geometry (e.g., a compression friendly geometry) may be the cubemap, which may have 6 faces. A face of the cubemap may be a planar square.

[0030] An equirectangular format may be supported in 360-degree cameras and/or stitching software. To encode 360-degree video in cubemap projection (CMP) format, an equirectangular projection format may be converted to a cubemap projection format. An equirectangular projection and a cubemap projection may be related. FIG. 3 shows an example where, for a (e.g., each) face, one of the three axes goes from the center of the sphere to the center of the face. 'P' may stand for positive and 'N' may stand for negative. PX may indicate the direction along a positive X axis from the center of the sphere, PY may indicate the direction along a positive Y axis from the center of the sphere, and/or PZ may indicate the direction along a positive Z axis from the center of the sphere. NX may indicate the reverse direction of PX, NY may indicate the reverse direction of PY, and/or NZ may indicate the reverse direction of PZ. The 6 faces (e.g., PX, NX, PY, NY, PZ, NZ) may correspond to the front, back, top, bottom, left, and right faces, respectively. The faces may be indexed from 0 to 5. Ps (e.g., X_s, Y_s, Z_s) may be the point on the sphere with radius 1. Ps may be represented in yaw φ and pitch θ, as follows:

X_s = cos(θ)cos(φ)    (3)

Y_s = sin(θ)    (4)

Z_s = -cos(θ)sin(φ)    (5)

[0031] Pf may be the point on the cube when extending the line from the sphere center to Ps. Pf may be on face NZ. The coordinates of Pf, (X_f, Y_f, Z_f), may be calculated as:

X_f = X_s/|Z_s| (6)

Y_f = Y_s/|Z_s| (7)

Z_f = -1 (8),

where |x| may be the absolute value of variable x. The coordinates of Pf, (uc, vc), in the 2D plane of face NZ, may be calculated as:

uc = W*(1 - X_f)/2    (9)

vc = H*(1 - Y_f)/2    (10)

[0032] From Equations (3) to (10), a relationship may be generated between the coordinates (uc, vc) on a face in the cubemap and the coordinates (φ, θ) on the sphere. The relationship may be determined between the equirectangular point (ue, ve) and the point (φ, θ) on the sphere from Equations (1) and (2). The relationship between equirectangular projection and cubemap projection may be determined. The geometry mapping from cubemap to equirectangular may be described as follows. Given the point (uc, vc) on one face in the cubemap, the output (ue, ve) on the equirectangular plane may be calculated as follows. The coordinates of the 3D point P_f on the face with (uc, vc) may be calculated according to the relationship in Equations (9) and (10). The coordinates of the 3D point P_s on the sphere with P_f may be calculated according to the relationship in Equations (6), (7), and (8). The (φ, θ) on the sphere with P_s may be calculated according to the relationship in Equations (3), (4), and (5). The coordinates of the point (ue, ve) on the equirectangular picture from (φ, θ) may be calculated according to the relationship in Equations (1) and (2).
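As a non-normative illustration of the mapping steps described above, the following C++ sketch maps a sample position (uc, vc) on face NZ back to ERP coordinates (ue, ve); the function name, the face and picture size parameters, and the normalization step are assumptions of this example.

```cpp
#include <cmath>

// Sketch of the cubemap-to-equirectangular mapping in paragraph [0032] for a
// point (uc, vc) on face NZ; the face is W x H samples, the ERP picture is
// We x He samples.
void cmpNzToErp(double uc, double vc, double W, double H,
                double We, double He, double& ue, double& ve) {
    const double PI = 3.14159265358979323846;
    // Equations (9)(10) inverted: face coordinates to a 3D point on the cube.
    double Xf = 1.0 - 2.0 * uc / W;
    double Yf = 1.0 - 2.0 * vc / H;
    double Zf = -1.0;
    // Project the cube point onto the unit sphere.
    double n = std::sqrt(Xf * Xf + Yf * Yf + Zf * Zf);
    double Xs = Xf / n, Ys = Yf / n, Zs = Zf / n;
    // Equations (3)-(5) inverted: sphere point to (phi, theta).
    double theta = std::asin(Ys);
    double phi   = std::atan2(-Zs, Xs);
    // Equations (1)(2): (phi, theta) to ERP coordinates.
    ue = (phi / (2.0 * PI) + 0.5) * We;
    ve = (0.5 - theta / PI) * He;
}
```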

[0033] The 6 faces of the cubemap may be packed into a rectangular area, which may be referred to as frame packing. For example, in order to represent the 360-degree video in a 2D rectangular picture using cubemap, the faces of the cubemap may be packed into a rectangular area. The frame packed pictures may be treated (e.g., coded) as a 2D picture (e.g., a normal 2D picture). There may be one or more frame packing configurations, such as 3x2 and 4x3. For example, in a 3x2 configuration, the 6 faces may be packed into 2 rows (e.g., with 3 faces in one row). In the 4x3 configuration, the 4 faces PX, NZ, NX, PZ may be packed into one row (e.g., the center row), and/or the faces PY and NY may be packed (e.g., separately packed) into two rows (e.g., different rows, such as the top and bottom rows).

[0034] A 360-degree video in equirectangular format may be converted into a cubemap format. For a (e.g., each) sample position (uc, vc) in a cubemap format, coordinates (e.g., the corresponding coordinates (ue, ve)) in an equirectangular format may be calculated. If the coordinates (ue, ve) in an equirectangular format are not at an integer sample position, an interpolation filter may be applied, for example, to obtain the sample value at the fractional position (e.g., using samples from the neighboring integer positions).

[0035] As shown in FIG. 4, using cubemap, the warping problem in equirectangular format (e.g., the sky and ground are stretched in FIG. 2A) may be avoided. For example, within a (e.g., each) face, the object may be the same as in a normal 2D picture without warping. There may be 6 sub-pictures corresponding to the 6 faces of the cubemap in FIG. 3.

[0036] An example work flow for a 360-degree video system is depicted in FIG. 5. The work flow for the 360-degree video system may include a 360-degree video capture using one or more cameras to capture videos covering the sphere space (e.g., the entire sphere space). The videos may be stitched together. For example, the videos may be stitched together in an equirectangular geometry structure. The equirectangular geometry structure may be converted to another geometry structure (such as cubemap) for encoding (e.g., encoding with existing video codecs). The coded video may be delivered to the client (e.g., via dynamic streaming and/or broadcasting). At the receiver, the video may be decoded, and/or the decompressed frame may be unpacked and/or converted to a display geometry (e.g., equirectangular). The video in the display geometry may be used for rendering, for example, via viewport projection according to a user's viewing angle.

[0037] One or more trigonometric functions may be used in the projection format conversion. The trigonometric functions may be non-linear trigonometric functions. The calculation may be performed in floating point precision for the geometry projection format conversion. For example, to achieve a desired quality for the converted result, the calculation may be performed in floating point precision for the geometry projection format conversion (e.g., due to the non-linear functions).

[0038] Interpolation may be performed in the projection format conversion. One or more interpolation filter coefficients may be applied for a sample derivation, such as a nearest neighbor, bilinear, bicubic, and/or Lanczos filter. An interpolation filter coefficient may be derived using a distance from a sample position (e.g., a sample position of the source projection format projected from a sample position of the destination projection format) to the sample position's top-left integer sampling position. For example, a backwards projection may be used to project a sample position of the destination projection format to a sample position of the source projection format (e.g., point P). The sample position of the destination projection format may be an integer sampling position. The sample position of the source projection format may be point P. Point P may be located at a non-integer (e.g., a floating point or a fixed point) sampling position, or point P may be at an integer sampling position. Point S0 may be the top-left integer sampling position of point P; w and h may be the sample width and height.

[0039] The sample value at a non-integer sample position (e.g., point P) may be derived with an interpolation using the neighboring samples at integer sample positions. For example, the sample value at point P may be derived via an interpolation using sample values at sample positions S0, S1, S2, and/or S3. In the interpolation, a (e.g., each) neighboring sample value at an integer position may be multiplied by an interpolation filter coefficient corresponding to that integer sample position, and may be added to derive the interpolated sample value at the non-integer sample position (e.g., point P). An interpolation filter coefficient at an integer sample position may be derived using "dx" and/or "dy." For example, w(S0) may be the interpolation filter coefficient for sample position S0, w(S1) may be the interpolation filter coefficient for sample position S1, w(S2) may be the interpolation filter coefficient for sample position S2, and/or w(S3) may be the interpolation filter coefficient for sample position S3. Sample values at S0, S1, S2 and S3 may be used for an interpolation of a sample value at position P, for example, in a bilinear interpolation.

[0040] The interpolation filter may be separable. If the interpolation filter is separable, the horizontal interpolation filter may be derived using "dx" and/or the vertical interpolation filter may be derived using "dy." For example, if bilinear interpolation is applied, the interpolation filter coefficients w(S) for sample positions S0, S1, S2, S3 may be calculated as:

wx = dx/w; wy = dy/h

w(S0) = (1 - wx)*(1 - wy)

w(S1) = wx*(1 - wy)

w(S2) = (1 - wx)*wy

w(S3) = wx*wy

The sample values at sample positions S0, S1, S2, S3 may be used for the bilinear interpolation to derive the sample value at position P.
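A minimal, non-normative C++ sketch of the coefficient derivation above follows, with the top-left integer position S0 obtained by flooring Px and Py as described in the paragraphs below; the function name and parameters are illustrative, and a regular sample grid (w = h = 1.0) is assumed for typical use.

```cpp
#include <cmath>

// Sketch of the bilinear interpolation coefficients w(S0)..w(S3) for a point P
// at (Px, Py); w and h are the horizontal and vertical sample spacings.
void bilinearWeights(double Px, double Py, double w, double h,
                     int& x0, int& y0, double weight[4]) {
    x0 = static_cast<int>(std::floor(Px));   // horizontal component of S0
    y0 = static_cast<int>(std::floor(Py));   // vertical component of S0
    double dx = Px - x0, dy = Py - y0;
    double wx = dx / w, wy = dy / h;
    weight[0] = (1.0 - wx) * (1.0 - wy);     // w(S0)
    weight[1] = wx * (1.0 - wy);             // w(S1)
    weight[2] = (1.0 - wx) * wy;             // w(S2)
    weight[3] = wx * wy;                     // w(S3)
}
```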

[0041] The sample values at sample positions (e.g., S0, S1, S2, and/or S3) may be determined using one or more of point P, dx, and/or dy. For example, point P's position in a horizontal direction (e.g., Px, as shown in FIG. 6) and/or point P's position in the vertical direction (e.g., Py, as shown in FIG. 6) may be identified and/or determined. Sample position S0 may be a top-left integer position of point P. The sample position S0 may be determined by using the values of Px and/or Py. For example, the horizontal component of the sample position S0 may be determined as Floor(Px). The vertical component of the sample position S0 may be determined as Floor(Py). The values of Px and/or Py may be fixed point values.

[0042] The nearest top-left integer sampling position (e.g., S0) may be used (e.g., identified), for example, to evaluate "dx" and "dy." The coordinate of P may be represented in floating point precision. A Floor() function may be applied to convert a floating point number to a nearest integer number. The integer number may not be greater than the input floating point value.

[0043] A nearest neighbor method may be used. In a nearest neighbor method, one or more interpolations may be replaced or removed. In a nearest neighbor method, a top-left integer sampling position (e.g., S0) may not be determined. If the nearest neighbor method is used, the nearest integer sample position of the non-integer sample position (e.g., point P) may be derived. The floating point value of the non-integer sample position (e.g., point P) may be converted to a fixed point value, as described herein. For example, the floating point value of the non-integer sample position (e.g., point P) may be converted to a fixed point value via a Round() function. The fixed point value of the non-integer sample position (e.g., point P) may be converted to an integer value. For example, a Round() function may be used to convert the fixed point value of the non-integer sample position (e.g., point P) to the integer value of the sample position's nearest integer sample position. The content of the first projection format may be converted to the content of the second projection format, based on the sample position's nearest integer sample position.

[0044] As described herein, Round() may (e.g., may also) be used to convert the floating point value to a fixed point value. For example, the interpolation filtering may be implemented in fixed point precision (e.g., where the interpolation filtering may use Round() to convert the filter coefficients from floating point precision to fixed point precision). The Round() function may be represented as Equation (11), where x may be a floating point variable and/or the Truncate() function may truncate the fraction part and keep (e.g., only keep) the integer part.

Round(x) = Truncate(x + 0.5), if x ≥ 0; Truncate(x - 0.5), if x < 0    (11)
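The Round() function of Equation (11) may be sketched as follows (a non-normative C++ example; the helper names are illustrative, and the cast to an integer type performs the truncation toward zero):

```cpp
// Sketch of Equation (11); Truncate() discards the fraction and keeps the
// integer part, as a cast to an integer type does in C++.
long long Truncate(double x) { return static_cast<long long>(x); }

long long Round(double x) {
    return (x >= 0.0) ? Truncate(x + 0.5) : Truncate(x - 0.5);
}
```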

[0045] One or more spherical metric calculations may be performed. The metrics may include a spherical PSNR without interpolation (S-PSNR-NN), a spherical PSNR with interpolation (S-PSNR-I), a weighted to spherically uniform PSNR (WS-PSNR), etc. S-PSNR may be used to calculate PSNR, for example, based on a set of points sampled (e.g., uniformly sampled) on the sphere. In an S-PSNR-NN calculation, a nearest neighbor method may be applied, for example, to find the sample value at a position (e.g., the corresponding position) on the projection plane. The Round() function may be used to determine the nearest integer sampling position. In an S-PSNR-I calculation, the interpolation may be applied to find the sample value at the corresponding position on the projection plane. The Floor() function may be used to derive the interpolation filter coefficients.

[0046] 360-degree video may be converted from one projection format to another projection format. When converting 360-degree video from one projection format to another projection format (e.g., from ERP to CMP), a floating point calculation may be used (e.g., may be used for geometry projection). The floating point number may be converted to an integer number (e.g., at certain stage(s)). The floating point based computation may cause the projection format conversion result to be platform dependent.

[0047] Floating point representation may support a dynamic range (e.g., a large dynamic range) with a precision (e.g., a high precision). For example, floating point may be represented as a single precision using 32 bits and/or a double precision using 64 bits (e.g., according to an IEEE floating point standard). The floating point computation may be slower (e.g., compared to fixed point computation), and/or the computation result may not be the same across one or more platforms. The floating point calculation may be performed by a floating point processing unit (FPU), for example, for acceleration. Results of the floating point calculations performed by different FPUs may be different because the precision for intermediate results may be different. Higher precision for intermediate results may result in a higher cost for the hardware.

[0048] In the projection conversion, one or more trigonometric functions (e.g., sin(), cos() and/or tan()) may be used. The functions may be implemented by one or more intrinsic libraries which may be provided with a compiler. For a compiler, the libraries may have one or more implementations in one or more compiler versions. The result may be different if a compiler and/or a compiler version is different, for example, even if the hardware (e.g., FPU) is the same. The difference in the result may be small. The difference in the result may make the results unrepeatable, for example, using one or more compilers and/or platforms.

[0049] The difference in geometry projection may cause different interpolation results. For example, the difference in geometry projection may cause different interpolation results after floating point numbers are converted to integer numbers. For example, as shown in FIG. 6, Px may be equal to 'm' on a first platform, and Px may be equal to (m - Δ) on a second platform, where 'm' may be an integer number and/or Δ may be a small difference (e.g., 10^-10) due to floating point computation on one or more platforms. Different top-left integer sample positions may result, for example, when the top-left integer sample position is computed using the Floor() function. On the first platform, the top-left integer position may be (m, Floor(Py)). On the second platform, the top-left integer position may be (m - 1, Floor(Py)). Different integer samples may then be used for interpolation, for example, to derive the sample value at position P. The Round() function (e.g., defined in Equation (11)) may behave similarly. If the input variables are 0.5, or approximately 0.5, a difference (e.g., a difference due to a floating point computation) may result in different integer values, for example, after applying the Round() function.

[0050] The projection format conversion result may not be a final result. The projection format conversion may be performed by following a process (e.g., an encoding process). The issue caused by floating point computation may make the system unrepeatable on one or more platforms and/or may make the system more difficult for verification on one or more platforms. The validation process of the system (e.g., including projection format conversion and/or subsequent processes) may become complex.

[0051] Trigonometric functions and/or non-linear functions (e.g., square root) may be implemented in fixed point. The fixed point implementation may use a look-up table (LUT). The LUT may be combined with interpolation, for example, to reduce the size of the look-up table and/or to keep the precision. The input value may belong to the set of control points (e.g., defined for the LUT). If the input value belongs to the set of control points defined for the LUT, the LUT may be applied (e.g., applied directly). If the input value lies between two control points, interpolation may be applied with the LUT: the LUT results of the neighboring control points that the input value lies between may be determined, and interpolation with those end points and the LUT results may be applied, for example, to derive the LUT value for the input value. The interpolation may be a linear or non-linear function (e.g., a polynomial function with higher orders). FIGs. 7A and 7B show examples of the two cases. For example, FIG. 7A uses a LUT to approximate the non-linear function f(X); FIG. 7B uses a LUT and linear interpolation to approximate the function f(X). The LUT-only technique may use more control points (e.g., a larger LUT size), for example, for accuracy (e.g., to achieve a high approximation accuracy). The LUT and interpolation may reduce the LUT size and/or result in a higher computation cost. Non-linear interpolation may be used to increase approximation accuracy, for example, using curve fitting techniques to approximate the trigonometric functions. For example, quadratic functions may be used to reduce large curve fitting errors, such as the error for the first [C0, C1] segment. A non-uniform partition of the X axis may be applied. As the trigonometric curve flattens out and becomes more similar to a linear function (e.g., as the X axis value increases), larger partition intervals may be used (e.g., to reduce the LUT size).
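A non-normative C++ sketch of the LUT-plus-linear-interpolation approach of FIG. 7B follows, shown here for sin() on [0, π/2] with uniformly spaced control points; the table size, interval, and function names are assumptions of this example.

```cpp
#include <cmath>
#include <vector>

static const double HALF_PI = 1.5707963267948966;

// Build the LUT: values of sin() at numIntervals + 1 uniformly spaced control points.
std::vector<double> buildSinLut(int numIntervals) {
    std::vector<double> lut(numIntervals + 1);
    for (int i = 0; i <= numIntervals; i++)
        lut[i] = std::sin(i * HALF_PI / numIntervals);
    return lut;
}

// Approximate sin(x) for x in [0, pi/2] by linear interpolation between the
// two neighboring control points (FIG. 7B style).
double sinFromLut(const std::vector<double>& lut, double x) {
    double step = HALF_PI / (lut.size() - 1);
    int i = static_cast<int>(x / step);
    if (i >= static_cast<int>(lut.size()) - 1) return lut.back();
    double frac = (x - i * step) / step;             // position between control points
    return lut[i] + frac * (lut[i + 1] - lut[i]);    // linear interpolation
}
```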

[0052] Floating point numbers may be aligned by converting the floating point numbers to fixed point numbers, for example, before being converted to integer numbers.

[0053] A sample position may be identified. The sample position may be associated with content of a first projection format. The sample position may be used in a projection format conversion, as described herein. The sample position may be represented as a floating point value or as a fixed point value. The floating point value may be single precision or double precision. The sample position may be converted from a floating point value to an integer value or from a fixed point value to an integer value. The sample position may be converted from a floating point value to a fixed point value, and the fixed point value may be converted to an integer value.

[0054] A scaling factor (e.g., S) may be identified. The scaling factor S may be used to convert the sample position from a floating point value to a fixed point value. The scaling factor S may be used to convert the sample position from a fixed point value to an integer value. As described herein, the scaling factor may be less than a scaling limit divided by a floating point computation precision limit. The scaling limit may be predefined or the scaling limit may be determined dynamically. For example, the scaling limit may be defined as 0.5. The floating point computation precision limit may be a generic floating point computation precision limit. The generic floating point computation precision limit may be denoted as Δ. One or more floating point computation precision limits may be associated with one or more particular floating point numbers. For example, a floating point computation precision limit may be associated with a floating point number 'a' and/or a floating point number 'b,' as described herein. In such an example, the floating point computation precision limits may be denoted as Δa and/or Δb.

[0055] Two floating point numbers, 'a' and 'b,' may be provided and/or identified. The floating point numbers may be at, or approximate to, the midway of two neighboring integers.

a = k + 0.5 + Δa

b = k + 0.5 - Δb

[0056] 'k' may be an integer. Floating point computation precision limits Δa and/or Δb may be respective variances (e.g., small positive deltas), for example, of the floating point numbers 'a' and 'b' caused by floating point computation. Floating point computation precision limits of floating point numbers 'a' and 'b' (e.g., Δa and/or Δb) may have values that may be within the floating point computation precision. For example, floating point computation precision limits of floating point numbers 'a' and 'b' (e.g., Δa and/or Δb) may have values that are around 10^-12 for double precision floating point computation and/or around 10^-6 for single precision floating point computation. As provided herein, Δ may be a floating point computation precision limit. For example, Δ may be a generic floating point computation precision limit, such as a floating point computation precision limit that is not associated with a particular floating point number. The precision limit of floating point computation may be identified such that the following may be satisfied:

Δa ≤ Δ, and Δb ≤ Δ    (12)

[0057] The Round() function (e.g., defined in Equation (11)) may convert a and b to their nearest integers. If the Round() function converts a and b to their nearest integers, different results, as follows, may be provided.

Round(a) = Round(k + 0.5 + Δa) = Truncate(k + 1 + Δa) = k + 1

Round(b) = Round(k + 0.5 - Δb) = Truncate(k + 1 - Δb) = k
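The effect described in paragraphs [0055]-[0057] may be illustrated by the following non-normative C++ sketch, in which two values that differ only by a tiny floating point error around the midpoint k + 0.5 round to different integers; the chosen k and delta values are assumptions of this example.

```cpp
#include <cstdio>

// Round() of Equation (11): truncation toward zero after adding/subtracting 0.5.
static long long roundHalfAway(double x) {
    return (x >= 0.0) ? (long long)(x + 0.5) : (long long)(x - 0.5);
}

int main() {
    long long k = 7;
    double delta = 1e-12;          // within double precision floating point error
    double a = k + 0.5 + delta;
    double b = k + 0.5 - delta;
    std::printf("Round(a) = %lld\n", roundHalfAway(a));  // prints 8 (= k + 1)
    std::printf("Round(b) = %lld\n", roundHalfAway(b));  // prints 7 (= k)
    return 0;
}
```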

[0058] Rounding may include the floating point being converted to fixed point and/or the fixed point being rounded to its nearest integer.

[0059] A scaling factor S may be determined and/or used for a floating point to fixed point conversion. The scaling factor S may be used to convert a sample position represented as a floating point value to a sample position represented as a fixed point value. Scaling factor S may be a positive number and/or scaling factor S may be an even number. If S is a positive even number used for floating point to fixed point conversion, the fixed point may be calculated. For example,

a_fp = Round(a*S)/S = Truncate(a*S + 0.5)/S = Truncate(k*S + 0.5*S + S*Δa + 0.5)/S

b_fp = Round(b*S)/S = Truncate(b*S + 0.5)/S = Truncate(k*S + 0.5*S - S*Δb + 0.5)/S

[0060] If the following condition is satisfied:

Δa*S < 0.5, and Δb*S < 0.5    (13)

then the following results may be provided if (k*S + S/2) is greater than or equal to 0:

a_fp = Round(a*S)/S = Truncate(k*S + 0.5*S + S*Δa + 0.5)/S = (k*S + S/2)/S

b_fp = Round(b*S)/S = Truncate(k*S + 0.5*S - S*Δb + 0.5)/S = (k*S + S/2)/S

otherwise, the following results may be provided if (k*S + S/2) is less than 0:

a_fp = Round(a*S)/S = Truncate(k*S + 0.5*S + S*Δa + 0.5)/S = (k*S + S/2 + 1)/S

b_fp = Round(b*S)/S = Truncate(k*S + 0.5*S - S*Δb + 0.5)/S = (k*S + S/2 + 1)/S
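A non-normative C++ sketch of the floating point to fixed point conversion of paragraphs [0059]-[0060] follows: the value is scaled by S, rounded, and divided by S again, so that values separated only by a platform-dependent delta map to the same fixed point value when constraint (13) holds. The function names and the S = 1024 example are assumptions of this sketch.

```cpp
// Round() of Equation (11).
static long long roundHalfAway(double x) {
    return (x >= 0.0) ? (long long)(x + 0.5) : (long long)(x - 0.5);
}

// Floating point to fixed point conversion with scaling factor S.
double toFixedPoint(double x, long long S) {
    return (double)roundHalfAway(x * (double)S) / (double)S;
}

// Example: toFixedPoint(7.5 + 1e-12, 1024) and toFixedPoint(7.5 - 1e-12, 1024)
// both evaluate to 7.5, removing the platform-dependent difference.
```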

[0061] The fixed point value (e.g., the converted fixed point value) may be converted to an integer value. For example, a sample position represented as a fixed point value may be converted to a sample position represented as an integer value. The Round() function may be applied to the converted fixed point number to get the integer value of the converted fixed point number.

[0062] The sample position represented as an integer value may be used to determine a sample value in the nearest neighbor method, as described herein. For example, a nearest integer sampling position of a point may be identified and/or determined. The sample value at this nearest integer sample position may be used as an approximation of the sample value at the position represented by the converted fixed point number.

[0063] Floor() may be used for filter coefficient derivation. For example, sample values at neighboring integer positions may be multiplied by an interpolation filter coefficient (e.g., a corresponding interpolation filter coefficient). The sample values (e.g., the weighted sample values) may be added and divided by the sum of all interpolation filter coefficients to form the interpolated sample value. The sum of the interpolation filter coefficients may equal 1.

[0064] An integer number (e.g., the same integer number) may be provided for a and b, for example, because the fixed point numbers for a and b may be the same. Equation (14) defines an example Round'() function.

Round'(x) = Round(Round(x * S)/S)    (14)
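A non-normative C++ sketch of Equation (14) follows: the floating point value is first converted to fixed point with scaling factor S, and the fixed point value is then rounded to the nearest integer, so both a and b of paragraph [0055] map to the same integer. S is assumed even and to satisfy constraint (13); names are illustrative.

```cpp
// Round() of Equation (11).
static long long roundHalfAway(double x) {
    return (x >= 0.0) ? (long long)(x + 0.5) : (long long)(x - 0.5);
}

// Round'(x) of Equation (14): floating point -> fixed point -> nearest integer.
long long roundPrime(double x, long long S) {
    double fixedPoint = (double)roundHalfAway(x * (double)S) / (double)S;
    return roundHalfAway(fixedPoint);
}
```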

[0065] The Floor() function may generate results (e.g., different results) when the floating point numbers are close to an integer. If there are two close floating point numbers a and b, and k is an integer, Δa and Δb may be variances (e.g., small differences) caused by floating point computation on one or more platforms.

a = k + Δa

b = k - Δb

[0066] If Floor() is applied (e.g., applied directly), the following results may be provided.

Floor(a) = Floor(k + Δa) = k

Floor(b) = Floor(k - Δb) = k - 1

An example Floor'() function, as in Equation (15), may be used such that the same results may be provided. The input values may be converted to fixed point, and the Floor() function may be evaluated with the fixed point values.

Floor'(x) = Floor(Round(x * S)/S)    (15)

[0068] The results with Floor'() may be as follows if (k*S) is greater than or equal to 0, for example, by considering the constraint (13).

Floor'(a) = Floor(Round((k + Δa)*S)/S) = Floor(Truncate((k + Δa)*S + 0.5)/S) = k

Floor'(b) = Floor(Round((k - Δb)*S)/S) = Floor(Truncate((k - Δb)*S + 0.5)/S) = k

The results with Floor'() may be as follows if (k*S) is less than 0, for example, by considering the constraint (13).

Floor'(a) = Floor(Round((k + Δa)*S)/S) = Floor(Truncate((k + Δa)*S + 0.5)/S) = k + 1

Floor'(b) = Floor(Round((k - Δb)*S)/S) = Floor(Truncate((k - Δb)*S + 0.5)/S) = k + 1

[0069] If the scaling factor (e.g., S) is selected and/or equals 2^N (N >= 1), Equations (14) and (15) may be simplified, as in Equations (16) and (17), for example, by replacing the division with a right shift.
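A non-normative C++ sketch of Floor'() and of shift-based forms consistent with Equations (16) and (17) follows. Because the bodies of Equations (16) and (17) are not reproduced above, the exact shift expressions below are assumptions derived from Equations (14) and (15) with S = 2^N and an arithmetic right shift; N = 10 (S = 1024) is an example value.

```cpp
#include <cmath>

// Round() of Equation (11).
static long long roundHalfAway(double x) {
    return (x >= 0.0) ? (long long)(x + 0.5) : (long long)(x - 0.5);
}

// Floor'(x) of Equation (15): floating point -> fixed point -> floor.
long long floorPrime(double x, long long S) {
    double fixedPoint = (double)roundHalfAway(x * (double)S) / (double)S;
    return (long long)std::floor(fixedPoint);
}

// Assumed shift-based variants with S = 2^N (in the spirit of Equations (16) and (17)).
long long roundPrimeShift(double x, int N) {
    long long fp = roundHalfAway(x * (double)(1LL << N)); // fixed point, scaled by 2^N
    return (fp + (1LL << (N - 1))) >> N;                  // nearest integer via shift
}

long long floorPrimeShift(double x, int N) {
    long long fp = roundHalfAway(x * (double)(1LL << N)); // fixed point, scaled by 2^N
    return fp >> N;                                       // floor via arithmetic shift
}
```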

[0070] The scaling factor S may be determined based on one or more parameters, as described herein. For example, the scaling factor S may be determined based on a scaling limit and/or a floating point computation precision limit (e.g., a generic floating point computation precision limit, Δ). The scaling limit may be predefined or the scaling limit may be determined dynamically. For example, the scaling limit may be 0.5. The scaling factor S may be less than the scaling limit (e.g., 0.5) divided by the floating point computation precision limit Δ. The scaling factor S may be greater than zero. The scaling factor S may be an even number or an integer number. For example, considering constraints (12) and (13), the scaling factor S may be determined such that the following constraint may be satisfied.

S < 0.5/Δ, and S >0 and S is even number 0 8 )

[0071] The scaling factor S may affect the precision loss. For example, the scaling factor S may affect the precision loss because the scaling factor S may be used to convert floating point to fixed point. A larger scaling factor S may have a smaller conversion loss. For example, if the floating point computation precision limit Δ is on the scale of 10^-12 for double precision floating point computation, the scaling factor S may be in the range of [10^2, 10^6]. If the floating point computation precision limit Δ is on the scale of 10^-6 for single precision floating point computation, the scaling factor S may be in the range of [10^2, 10^3].

[0072] The above may be applied to a floating point to integer conversion. For example, the above may be applied to projection format conversion and/or spherical metrics calculation (such as S-PSNR-NN and S-PSNR-I).

[0073] Floating point to integer conversion may be performed, for example, using an intermediate conversion. For example, a floating point value may be converted to a fixed point value (e.g., a fixed point number with a fixed point precision), and the fixed point value may be converted to an integer value. Floating point numbers with differences (e.g., small differences) on different platforms may be mapped to the same fixed point number. A scaling factor, S, may be used to convert the floating point value to a fixed point value. The scaling factor S may be pre-determined or dynamically determined (e.g., signaled). For example, the scaling factor S may be determined based on precision requirements. The fixed point number may be converted to an integer number.

[0074] The division operation(s) using Round() and/or Floor() for the conversion from floating point to fixed point may be replaced with one or more right shift operation(s). For example, one or more of the division operation(s) provided by Equations (14) and (15) may be replaced by one or more of the right shift operation(s) provided by Equations (16) and (17). The scaling factor S may be changed to 2^N for the right shift operation(s). With the right shift operations, one or more division operations may be reduced or removed. The scaling factor S may be set to 2^N to reduce computation complexity.

[0075] The above can be applied to the floating point to fixed point conversion. For example, the above may be used to convert floating point to fixed point in a higher precision than the required precision. The above may be used to convert fixed point in the higher precision to final fixed point in the required precision. The difference (e.g., small difference) caused by floating point computation on one or more platforms may be removed by the conversion (e.g., the conversion in the first step).

[0076] FIG. 8A is a diagram of an example communications system 100 in which one or more disclosed embodiments may be implemented. The communications system 100 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users. The communications system 100 may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth. For example, the communications system 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), and the like.

[0077] As shown in FIG. 8A, the communications system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, and/or 102d (which generally or collectively may be referred to as WTRU 102), a radio access network (RAN) 103/104/105, a core network 106/107/109, a public switched telephone network (PSTN) 108, the Internet 110, and other networks 112, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements. Each of the WTRUs 102a, 102b, 102c, 102d may be any type of device configured to operate and/or communicate in a wireless environment. By way of example, the WTRUs 102a, 102b, 102c, 102d may be configured to transmit and/or receive wireless signals and may include user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, consumer electronics, and the like.

[0078] The communications system 100 may also include a base station 114a and a base station 114b. Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the core network 106/107/109, the Internet 110, and/or the networks 112. By way of example, the base stations 114a, 114b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.

[0079] The base station 114a may be part of the RAN 103/104/105, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc. The base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals within a particular geographic region, which may be referred to as a cell (not shown). The cell may further be divided into cell sectors. For example, the cell associated with the base station 114a may be divided into three sectors. Thus, in one embodiment, the base station 114a may include three transceivers, i.e., one for each sector of the cell. In another embodiment, the base station 114a may employ multiple-input multiple output (MIMO) technology and, therefore, may utilize multiple transceivers for each sector of the cell.

[0080] The base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 115/116/117, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, infrared (IR), ultraviolet (UV), visible light, etc.). The air interface 115/116/117 may be established using any suitable radio access technology (RAT).

[0081] More specifically, as noted above, the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, the base station 114a in the RAN 103/104/105 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 115/116/117 using wideband CDMA (WCDMA). WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+). HSPA may include High-Speed Downlink Packet Access (HSDPA) and/or High-Speed Uplink Packet Access (HSUPA).

[0082] In another embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 115/116/117 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A).

[0083] In other embodiments, the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.

[0084] The base station 114b in FIG. 8A may be a wireless router, Home Node B, Home eNode B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, and the like. In one embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN). In another embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN). In yet another embodiment, the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, etc.) to establish a picocell or femtocell. As shown in FIG. 8A, the base station 114b may have a direct connection to the Internet 110. Thus, the base station 114b may not be required to access the Internet 110 via the core network 106/107/109.

[0085] The RAN 103/104/105 may be in communication with the core network 106/107/109, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d. For example, the core network 106/107/109 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication. Although not shown in FIG. 8A, it will be appreciated that the RAN 103/104/105 and/or the core network 106/107/109 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 103/104/105 or a different RAT. For example, in addition to being connected to the RAN 103/104/105, which may be utilizing an E-UTRA radio technology, the core network 106/107/109 may also be in communication with another RAN (not shown) employing a GSM radio technology.

[0086] The core network 106/107/109 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or other networks 112. The PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS). The Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and the internet protocol (IP) in the TCP/IP internet protocol suite. The networks 112 may include wired or wireless communications networks owned and/or operated by other service providers. For example, the networks 112 may include another core network connected to one or more RANs, which may employ the same RAT as the RAN 103/104/105 or a different RAT.

[0087] Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities, i.e., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links. For example, the WTRU 102c shown in FIG. 8A may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.

[0088] FIG. 8B is a system diagram of an example WTRU 102. As shown in FIG. 8B, the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and other peripherals 138. It will be appreciated that the WTRU 102 may include any sub-combination of the foregoing elements while remaining consistent with an embodiment. Also, embodiments contemplate that the base stations 114a and 114b, and/or the nodes that base stations 114a and 114b may represent, such as but not limited to a transceiver station (BTS), a Node-B, a site controller, an access point (AP), a home node-B, an evolved home node-B (eNodeB), a home evolved node-B (HeNB), a home evolved node-B gateway, and proxy nodes, among others, may include some or all of the elements depicted in FIG. 8B and described herein.

[0089] The processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment. The processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 8B depicts the processor 118 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.

[0090] The transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 115/116/117. For example, in one embodiment, the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. In another embodiment, the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example. In yet another embodiment, the transmit/receive element 122 may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.

[0091] In addition, although the transmit/receive element 122 is depicted in FIG. 8B as a single element, the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 115/116/117.

[0092] The transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122. As noted above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as UTRA and IEEE 802.11, for example.

[0093] The processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). The processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128. In addition, the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132. The non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).

[0094] The processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102. The power source 134 may be any suitable device for powering the WTRU 102. For example, the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.

[0095] The processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102. In addition to, or in lieu of, the information from the GPS chipset 136, the WTRU 102 may receive location information over the air interface 115/116/117 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.

[0096] The processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity. For example, the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like.

[0097] FIG. 8C is a system diagram of the RAN 103 and the core network 106 according to an embodiment. As noted above, the RAN 103 may employ a UTRA radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 115. The RAN 103 may also be in communication with the core network 106. As shown in FIG. 8C, the RAN 103 may include Node-Bs 140a, 140b, 140c, which may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 115. The Node-Bs 140a, 140b, 140c may each be associated with a particular cell (not shown) within the RAN 103. The RAN 103 may also include RNCs 142a, 142b. It will be appreciated that the RAN 103 may include any number of Node-Bs and RNCs while remaining consistent with an embodiment.

[0098] As shown in FIG. 8C, the Node-Bs 140a, 140b may be in communication with the RNC 142a. Additionally, the Node-B 140c may be in communication with the RNC 142b. The Node-Bs 140a, 140b, 140c may communicate with the respective RNCs 142a, 142b via an Iub interface. The RNCs 142a, 142b may be in communication with one another via an Iur interface. Each of the RNCs 142a, 142b may be configured to control the respective Node-Bs 140a, 140b, 140c to which it is connected. In addition, each of the RNCs 142a, 142b may be configured to carry out or support other functionality, such as outer loop power control, load control, admission control, packet scheduling, handover control, macrodiversity, security functions, data encryption, and the like.

[0099] The core network 106 shown in FIG. 8C may include a media gateway (MGW) 144, a mobile switching center (MSC) 146, a serving GPRS support node (SGSN) 148, and/or a gateway GPRS support node (GGSN) 150. While each of the foregoing elements is depicted as part of the core network 106, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator.

[0100] The RNC 142a in the RAN 103 may be connected to the MSC 146 in the core network 106 via an IuCS interface. The MSC 146 may be connected to the MGW 144. The MSC 146 and the MGW 144 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices.

[0101] The RNC 142a in the RAN 103 may also be connected to the SGSN 148 in the core network 106 via an IuPS interface. The SGSN 148 may be connected to the GGSN 150. The SGSN 148 and the GGSN 150 may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.

[0102] As noted above, the core network 106 may also be connected to the networks 112, which may include other wired or wireless networks that are owned and/or operated by other service providers.

[0103] FIG. 8D is a system diagram of the RAN 104 and the core network 107 according to an embodiment. As noted above, the RAN 104 may employ an E-UTRA radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116. The RAN 104 may also be in communication with the core network 107.

[0104] The RAN 104 may include eNode-Bs 160a, 160b, 160c, though it will be appreciated that the RAN 104 may include any number of eNode-Bs while remaining consistent with an embodiment. The eNode-Bs 160a, 160b, 160c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116. In one embodiment, the eNode-Bs 160a, 160b, 160c may implement MIMO technology. Thus, the eNode-B 160a, for example, may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU 102a.

[0105] Each of the eNode-Bs 160a, 160b, 160c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the uplink and/or downlink, and the like. As shown in FIG. 8D, the eNode-Bs 160a, 160b, 160c may communicate with one another over an X2 interface.

[0106] The core network 107 shown in FIG. 8D may include a mobility management entity (MME) 162, a serving gateway 164, and a packet data network (PDN) gateway 166. While each of the foregoing elements is depicted as part of the core network 107, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator.

[0107] The MME 162 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via an S1 interface and may serve as a control node. For example, the MME 162 may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 102a, 102b, 102c, and the like. The MME 162 may also provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM or WCDMA.

[0108] The serving gateway 164 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via the S1 interface. The serving gateway 164 may generally route and forward user data packets to/from the WTRUs 102a, 102b, 102c. The serving gateway 164 may also perform other functions, such as anchoring user planes during inter-eNode B handovers, triggering paging when downlink data is available for the WTRUs 102a, 102b, 102c, managing and storing contexts of the WTRUs 102a, 102b, 102c, and the like.

[0109] The serving gateway 164 may also be connected to the PDN gateway 166, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.

[0110] The core network 107 may facilitate communications with other networks. For example, the core network 107 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices. For example, the core network 107 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the core network 107 and the PSTN 108. In addition, the core network 107 may provide the WTRUs 102a, 102b, 102c with access to the networks 112, which may include other wired or wireless networks that are owned and/or operated by other service providers.

[0111] FIG. 8E is a system diagram of the RAN 105 and the core network 109 according to an embodiment. The RAN 105 may be an access service network (ASN) that employs IEEE 802.16 radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 117. As will be further discussed below, the communication links between the different functional entities of the WTRUs 102a, 102b, 102c, the RAN 105, and the core network 109 may be defined as reference points.

[0112] As shown in FIG. 8E, the RAN 105 may include base stations 180a, 180b, 180c, and an ASN gateway 182, though it will be appreciated that the RAN 105 may include any number of base stations and ASN gateways while remaining consistent with an embodiment. The base stations 180a, 180b, 180c may each be associated with a particular cell (not shown) in the RAN 105 and may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 117. In one embodiment, the base stations 180a, 180b, 180c may implement MIMO technology. Thus, the base station 180a, for example, may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU 102a. The base stations 180a, 180b, 180c may also provide mobility management functions, such as handoff triggering, tunnel establishment, radio resource management, traffic classification, quality of service (QoS) policy enforcement, and the like. The ASN gateway 182 may serve as a traffic aggregation point and may be responsible for paging, caching of subscriber profiles, routing to the core network 109, and the like.

[0113] The air interface 117 between the WTRUs 102a, 102b, 102c and the RAN 105 may be defined as an R1 reference point that implements the IEEE 802.16 specification. In addition, each of the WTRUs 102a, 102b, 102c may establish a logical interface (not shown) with the core network 109. The logical interface between the WTRUs 102a, 102b, 102c and the core network 109 may be defined as an R2 reference point, which may be used for authentication, authorization, IP host configuration management, and/or mobility management.

[0114] The communication link between each of the base stations 180a, 180b, 180c may be defined as an R8 reference point that includes protocols for facilitating WTRU handovers and the transfer of data between base stations. The communication link between the base stations 180a, 180b, 180c and the ASN gateway 182 may be defined as an R6 reference point. The R6 reference point may include protocols for facilitating mobility management based on mobility events associated with each of the WTRUs 102a, 102b, 102c.

[0115] As shown in FIG. 8E, the RAN 105 may be connected to the core network 109. The communication link between the RAN 105 and the core network 109 may be defined as an R3 reference point that includes protocols for facilitating data transfer and mobility management capabilities, for example. The core network 109 may include a mobile IP home agent (MIP-HA) 184, an authentication, authorization, accounting (AAA) server 186, and a gateway 188. While each of the foregoing elements is depicted as part of the core network 109, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator.

[0116] The MIP-HA may be responsible for IP address management, and may enable the WTRUs 102a, 102b, 102c to roam between different ASNs and/or different core networks. The MIP-HA 184 may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices. The AAA server 186 may be responsible for user authentication and for supporting user services. The gateway 188 may facilitate interworking with other networks. For example, the gateway 188 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices. In addition, the gateway 188 may provide the WTRUs 102a, 102b, 102c with access to the networks 112, which may include other wired or wireless networks that are owned and/or operated by other service providers.

[0117] Although not shown in FIG. 8E, it will be appreciated that the RAN 105 may be connected to other ASNs and the core network 109 may be connected to other core networks. The communication link between the RAN 105 and the other ASNs may be defined as an R4 reference point, which may include protocols for coordinating the mobility of the WTRUs 102a, 102b, 102c between the RAN 105 and the other ASNs. The communication link between the core network 109 and the other core networks may be defined as an R5 reference point, which may include protocols for facilitating interworking between home core networks and visited core networks.

[0118] Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable media include electronic signals (transmitted over wired or wireless connections) and computer-readable storage media. Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.