

Title:
SUBSURFACE SCATTERING FOR MOBILE APPLICATIONS
Document Type and Number:
WIPO Patent Application WO/2023/122357
Kind Code:
A1
Abstract:
Systems and methods are provided for rendering subsurface scattering. The system can receive a thickness map of an object. A dot product can be calculated based on a normal to a surface of the object and a vector directed towards a light source. An offset can be applied to the dot product. The system can determine subsurface lighting for a face of the object opposite to a shaded point of the object based on the thickness map, the dot product, and the offset. Subsurface lighting for a face of the object adjacent to the shaded point can be based on a direct illumination property of the object and the dot product. The object can be rendered in a graphical interface by additively applying all subsurface lighting for the object.

Inventors:
NING PAULA (US)
LI CHEN (US)
LI MINHAO (US)
SUN HONGYU (US)
Application Number:
PCT/US2022/054407
Publication Date:
June 29, 2023
Filing Date:
December 31, 2022
Assignee:
INNOPEAK TECH INC (US)
International Classes:
G06T15/50; G06T15/60; G06T15/80
Foreign References:
US20210192838A12021-06-24
US20030231173A12003-12-18
US20050088440A12005-04-28
US20210225075A12021-07-22
Attorney, Agent or Firm:
AGDEPPA, Hector, A. (US)
Claims:

What is claimed is:

1. A method for determining subsurface scattering comprising:
receiving a thickness map of an object;
determining a dot product of a normal to a surface of the object and a vector directed towards a light source and applying an offset to the dot product;
determining subsurface lighting for a face of the object opposite to a shaded point of the object based on the thickness map, the dot product, and the offset;
determining subsurface lighting for a face of the object adjacent to the shaded point based on a direct illumination property of the object and the dot product; and
rendering the object in a graphical interface by additively applying all subsurface lighting for the object.

2. The method of claim 1, wherein opacity for the object is defined as an intensity scalar for subsurface lighting, and subsurface objects are differentiated from transmissive objects by a threshold opacity.

3. The method of claim 1, wherein the subsurface lighting for the face adjacent to the shaded point is zeroed if the dot product is less than or equal to zero.

4. The method of claim 1, wherein the subsurface lighting for the face opposite to the shaded point is attenuated based on a negated version of the dot product.

5. The method of claim 1, wherein the thickness map is chromatic or greyscale.

6. The method of claim 1, wherein the thickness map is computed as a UV-mapped ambient occlusion map.

7. The method of claim 1, wherein the graphical interface comprises an application of a mobile device.

8. A mobile device comprising:
a processor; and
a memory encoded with instructions which when executed, cause the processor to:
determine a dot product of a normal to a surface of an object and a vector directed towards a light source and apply an offset to the dot product;
determine subsurface lighting for a first face adjacent to a shaded point of the object and a second face opposite to the shaded point based on the dot product, the offset, a subsurface intensity for the object, and a thickness map for the object; and
render the object in a graphical interface by additively applying all subsurface lighting to the object.

9. The mobile device of claim 8, wherein opacity for the object is defined as an intensity scalar for subsurface lighting, and subsurface objects are differentiated from transmissive objects by a threshold opacity.

10. The mobile device of claim 8, wherein the subsurface lighting for the face adjacent to the shaded point is zeroed if the dot product is less than or equal to zero.

11. The mobile device of claim 8, wherein the graphical interface comprises an application of a mobile device.

12. The mobile device of claim 8, wherein the offset is applied to approximate scattering of light for a surface of the object that is not illuminated.

13. The mobile device of claim 8, wherein subsurface lighting for the second face is based on a negated version of the dot product.

14. The mobile device of claim 8, wherein the object comprises a thickness map.

15. A non-transitory machine-readable medium encoded with instructions which when executed by a processor, cause the processor to:
receive a thickness map of an object;
determine the object meets or exceeds a threshold opacity based on an alpha channel of an albedo map for the object;
determine a dot product of a normal to a surface of the object and a vector directed towards a light source and apply an offset to the dot product;
determine subsurface lighting for a face opposite to a shaded point of the object based on direct illumination at the shaded point, the thickness map, the dot product, the offset, and a subsurface color of the object;
determine subsurface lighting for a face adjacent to the shaded point based on the direct illumination at the shaded point, the dot product, and the offset; and
render the object in a graphical interface by applying all subsurface lighting to the object.

16. The non-transitory machine-readable medium of claim 15, wherein the subsurface lighting for the face adjacent to the shaded point is zeroed if the dot product is less than or equal to zero.

17. The non-transitory machine-readable medium of claim 15, wherein the subsurface lighting for the face opposite to the shaded point is based on an absolute value of the dot product.

18. The non-transitory machine-readable medium of claim 15, wherein the thickness map is chromatic or greyscale.

19. The non-transitory machine-readable medium of claim 15, wherein the thickness map comprises a UV-mapped ambient occlusion.

20. The non-transitory machine-readable medium of claim 15, wherein the graphical interface comprises an application of a mobile device.


Description:
SUBSURFACE SCATTERING FOR MOBILE APPLICATIONS

Cross-Reference to Related Applications

[0001] This application claims the benefit of U.S. Provisional Application No. 63/422,246 filed November 3, 2022 and titled "REAL-TIME RAY TRACED TRANSLUCENCY AND TRANSPARENT SHADOWS FOR MOBILE," which is hereby incorporated herein by reference in its entirety. This application is also related to co-pending PCT application for Attorney Docket Number 75EP-364822-WO entitled "VOLUMETRIC TRANSPARENCY AND SHADOWS FOR MOBILE GRAPHICS", which is hereby incorporated herein by reference in its entirety.

Technical Field

[0002] The disclosed technology generally relates to methods of rendering digital images at a user device. Particularly, the disclosed technology includes methods and systems for rendering lighting and shadow effects on objects of various translucency.

Background

[0003] Image rendering quality, including more realistic simulations of real-world objects, higher resolution, or smoother framerates, is a constant goal for several technologies and applications. Trends in real-time gaming applications toward employing processor-heavy graphical manipulation techniques such as path tracing have put pressure on innovators in this space to develop new digital image rendering methods. Such new digital rendering methods attempt to provide the same or similar visual effects (as provided by past methods) with less demand on hardware components, while allowing users of real-time gaming applications to experience increased flexibility in tailoring their user experience to their personal preferences.

Brief Description of the Drawings

[0004] The present disclosure, in accordance with one or more various embodiments, is described in detail with reference to the following figures. The figures are provided for purposes of illustration only and merely depict typical or example embodiments.

[0005] FIG. 1 illustrates one embodiment of a user experience rendering system in accordance with some examples of the disclosure.

[0006] FIG. 2 illustrates example pipelines for subsurface scattering for thick and thin objects.

[0007] FIG. 3 illustrates the aspects of subsurface scattering for thick objects.

[0008] FIG. 4 illustrates the aspects of subsurface scattering for thin objects.

[0009] FIG. 5 depicts a block diagram of an example computer system in which various of the embodiments described herein may be implemented.

[0010] The figures are not exhaustive and do not limit the present disclosure to the precise form disclosed.

Detailed Description

[0011] Graphics rendering for video games continues to advance, especially in the realm of mobile devices and games. As mobile devices do not have as much processing power as desktop personal computers or game consoles, it is important to balance the quality of graphics for mobile applications with the processing requirements. Real-time application rendering is most commonly achieved through rasterization of triangular geometry. Triangular geometry is defined as a list of positions given for the vertices of each triangle within a given object. Each copy of each object has an independent matrix which is used to place and orient that particular copy in a common coordinate space known as world space. An additional matrix associated with a camera is used to convert world space coordinates to view space coordinates, and finally a projection matrix is used to project those 3D positions onto a 2D screen plane. The transformation and projection steps are typically performed in a stage known as a "vertex shader", in which a shader program is run for every vertex being processed. The rasterization pipeline is named for the hardware-accelerated rasterization step that occurs after the vertex shader, which interpolates all data defined per-vertex across all pixels within the triangles being rendered, thus converting per-vertex data to per-pixel data. Next, a "pixel shader" is executed for all pixels thus interpolated. While many rasterization pipelines have additional stages, the vertex-raster-pixel pipeline is the core of all rasterization pipelines. Volumetric transparency and shadowing are advanced graphics effects that are not ubiquitous even in desktop and console applications. Real-time transparency in rasterization pipelines is most commonly achieved through a method known as alpha blending: that is, objects have either a fixed or texture-mapped alpha value which is used to attenuate the output of the fragment shader for simple blending with the background color. This method achieves a flat, less realistic appearance for transparent objects and requires separate sorting and rendering of transparent objects in order to correctly render their overlaps with other objects. Ray tracing, an alternative method of rendering that computes the intersection of rays with objects, does not require separate rendering of transparent objects and facilitates improved transparent rendering methods. Since ray tracing is much more expensive than rasterization, it is ideal to apply it only to problems that are poorly solved by rasterization, such as rendering transparent objects.
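
For reference, the alpha blending described above reduces to a few lines of shader-style code. This generic sketch is illustrative only and is not part of the disclosed pipeline; the function and parameter names are assumptions.

// Generic alpha blending (illustrative): the fragment's alpha attenuates its
// color, which is then blended with the color already in the framebuffer.
vec4 alphaBlend(vec4 src, vec4 dst)
{
    vec3 rgb = src.rgb * src.a + dst.rgb * (1.0 - src.a);
    float a = src.a + dst.a * (1.0 - src.a);
    return vec4(rgb, a);
}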

[0012] Examples of the disclosed technology assist in rendering subsurface scattering for objects rendered in a mobile application. In particular, embodiments can provide a pipeline for determining subsurface scattering for thick and thin objects. Subsurface scattering refers to radiance emitted from a given surface point that is transported to that point from below that surface rather than from light directly incident on the point. Subsurface scattering creates the effect of light shining through a partially translucent material. For example, if a human hand is placed in front of a flashlight, the flashlight will partially shine through the skin due to the partial translucency. Rendering this effect for objects provides a more realistic and three-dimensional appearance of more opaque translucent materials such as skin, wax, and marble.

[0013] The embodiments described herein involve three-dimensional models of objects as they are generated in a graphics environment. The generated three-dimensional model can involve the appearance of light and shadow interactions with the model based on a theoretical light source. Subsurface scattering can be a rendered effect on the three-dimensional model. A representation of this phenomenon can be generated based on geometric properties of the object in addition to material parameters. The end result is a three-dimensional model that appears to interact with light in a three-dimensional manner.

[0014] In particular, subsurface scattering at a particular point can be approximated using a dot product of the object's normal and the direction towards the light, the dot product referred to herein as NdotL. The object's normal represents the perpendicular vector at the shaded point of an object's surface. The direction towards the light represents the vector directed from the point on the surface to the position of the light source or the position of a sample on the light source. NdotL can be applied alongside a thickness map of the object to approximate the contribution of that light source from all surfaces in a small radius around the shaded point. The embodiments described herein apply NdotL to introduce a dependency on light direction to the otherwise lighting-agnostic thickness map values. Additionally, the NdotL value is offset by a fixed amount in order to cheaply simulate subsurface scattering from neighboring surface points. The thickness map, on the other hand, is necessary to approximate subsurface scattering from nearby faces oriented in the opposite direction of the shading point, since these contributions would not be accounted for when simply offsetting NdotL. Using these approximations does not require as much processing power because of the simplicity of the calculation. In contrast, applying ray tracing would involve casting sampling rays many times to dynamically calculate the distance to a neighboring point (which may be considered a sample of local thickness) and the incident radiance at the neighboring point (which the embodiments herein estimate from the incident radiance at the original point). Applying the full solution to subsurface scattering to every point of an object's surface exponentially increases the number of computations. Therefore, the embodiments described herein are more suitable for a mobile device and application to facilitate realistic graphics features while preserving processing capacity.
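
As a minimal sketch of this approximation, the entire per-light estimate reduces to one dot product, one fixed offset, and one thickness-map lookup. The names NDOTL_OFFSET and localThickness follow the code listings later in this disclosure; this fragment is illustrative rather than the actual implementation.

// Core approximation: no subsurface ray casts, only a dot product, a fixed
// offset, and a thickness-map-derived term.
float nDotL = dot(N, L);                                        // N: surface normal, L: direction to light
float adjacentTerm = clamp(nDotL + NDOTL_OFFSET, 0.0, 1.0);     // neighboring, same-facing points
float oppositeTerm = (1.0 - localThickness) * max(-nDotL, 0.0); // thin, opposite-facing faces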

[0015] FIG. 1 illustrates one embodiment of a user experience rendering system 100 in accordance with some examples of the disclosure. The user experience rendering system 100 comprises at least machine readable media 102, processor 104, input interface 106, display interface 108, and communication interface 110. Machine readable media 102 may comprise any form of information storage (e.g., random-access memory ("RAM"), read only memory ("ROM"), flash drives, processor caches), and covers both static and dynamic storage as well as long term and short term storage. Some of the information stored on machine readable media 102 may be categorized as rendering modules 112 and/or program data 114. Rendering modules may include operating system component 116, application program component 118, and graphical application component 120 (described further below). Program data 114 may refer to any collection of data input to and/or output by the processor 104 when executing rendering modules 112. Program data 114 may include operating system data 122, application data 124, and graphical application data 126.

[0016] Processor 104 may refer to one or more general purpose processors (e.g., microprocessors) and/or one or more special purpose processors (e.g., GPUs, network processors, or ASICs). Further, in embodiments where multiple processors are represented by processor 104, said processors may be operated in parallel so that multiple instruction sets may be simultaneously executed by processor 104. Input interface 106 may refer to any device with which a user may interact (e.g., a smart phone or other touch screen device), which converts such interactions into signals interpretable by processor 104. Display interface 108 may refer to any device which may output a visual experience to a user (e.g., a smartphone screen, a liquid crystal display ("LCD"), a light emitting diode ("LED")). FIG. 1 depicts display interface 108 as a mobile touch screen; however, other embodiments of user experience rendering system 100 may be implemented on a different platform which lends itself to other types of display devices. In addition, display interface 108 is depicted here as relating to only one display device; however, display interface 108 may also refer to one or more additional display devices (e.g., LEDs, attached LCD screens) that may be operated simultaneously with the mobile device's screen. Communication interface 110 may refer to one or more devices that allow processor 104 to communicate with components not located locally with processor 104, and/or one or more devices that allow for instructions and/or data to be sent from machine readable media 102 over a network. Communication interface 110 may include a modem or soft modem, a network interface (such as an Ethernet, network interface card, WiMedia, IEEE 802.XX or other interface), a communications port (such as, for example, a USB port, IR port, RS232 port, Bluetooth® interface, or other port), or other communications interface.

[0017] Referring now to executable rendering module 112 and program data 114 present on machine readable media 102, operating system 116 manages the various hardware and software components of the system and provides common interfacing services. Operating system 116 may include any known operating system available or may be custom written for the system. Application program 118 may include one or more software programs meant to perform a function aside from rendering a user experience (e.g., email, phone, internet browser). Application data 124 may refer to data related to the functions performed by application program 118 (e.g., email address book, phone number contact list, bookmarked webpage list).

[0018] Graphical application 120 may include any software program meant to output one or more rendered images. When executing instructions relating to graphical application 120, processor 104 may read input from and/or output to graphical application data 126. Graphical application data 126 may refer to data related to the content visually displayed to a user (e.g., simulated character settings, login account profile, simulated environment objects).

[0019] Graphical application 120 can comprise thick object subsurface scattering module 132 and thin object subsurface scattering module 134. Thick object subsurface scattering module 132 can apply a pipeline, described below in FIG. 2, to render subsurface scattering. Thick objects are differentiated from thin objects by reusing the index of refraction (IOR) material parameter. Objects with thick subsurface scattering have an IOR greater than 0, and are distinguished from transmissive objects by having an opacity greater than a global alpha cutoff parameter. Objects with thin subsurface scattering have an IOR equal to 0, and are distinguished from alpha cutoff objects without subsurface scattering by having a subsurface scattering parameter greater than 0. Examples of objects with thick subsurface scattering include skin on human figures and objects sculpted of marble or wax. Thin object subsurface scattering module 134 can apply a similar pipeline, described below in FIG. 2, to render subsurface scattering. Examples of thin objects with subsurface scattering can comprise foliage or paper. Depending on the desired appearance of the object, the pipelines can be applied to render subsurface scattering accordingly. The subsurface parameter, the index of refraction, and the opacity of the material applied to a given object determine whether subsurface scattering is rendered for the object, and if so, which version of subsurface scattering is applied. Opacity may be applied through an albedo map texture or configured as a constant value for a given material. The subsurface parameter is always a constant value for a given material, but when it is greater than 0, the emission texture is interpreted as subsurface color instead of emission. The index of refraction parameter is configured as a constant value for a given material. Modules 128 and 130 are further described in the PCT application for Attorney Docket Number 75EP-364822, the contents of which are incorporated herein in their entirety.
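
A sketch of this classification logic follows; the function, parameter names, and return codes are illustrative assumptions rather than the actual implementation.

// Illustrative material classification per the description above (names are assumptions).
const int SSS_NONE = 0;
const int SSS_THICK = 1;
const int SSS_THIN = 2;
const int TRANSMISSIVE = 3;

int classifyMaterial(float ior, float opacity, float subsurface, float alphaCutoff)
{
    if (ior > 0.0) {
        // IOR > 0: thick subsurface scattering when more opaque than the global alpha cutoff.
        return (opacity > alphaCutoff) ? SSS_THICK : TRANSMISSIVE;
    }
    // IOR == 0: thin subsurface scattering when the subsurface parameter is greater than 0.
    return (subsurface > 0.0) ? SSS_THIN : SSS_NONE;
}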

[0020] FIG. 2 illustrates example pipelines 200 and 220 for subsurface scattering as applied to thick and thin objects, respectively. For thick objects, pipeline 200 can comprise applying a thickness map. This thickness map can comprise a texture attachment to the object model: a UV-mapped texture containing the internal ambient occlusion of the mesh. The thickness map is computed offline using the UV mapping from the albedo texture to generate an ambient occlusion value at all pixels on the UV map. The ambient occlusion value is generated by casting a configurable number of rays with a configurable max length in a stochastically selected direction offset from the negated geometric normal of the mesh. Using a negated normal orients the ray direction to cast inside the interior of the mesh. Then, cosine-weighted hemisphere sampling is used to perturb the negated normal in a random direction. Finally, the distance to the nearest hit is used to compute an average occlusion value, where occlusion is proportional to hit distance, with a miss being given the max occlusion value of 1. Since occlusion takes smaller values in more occluded areas, it may be used directly as a parameterization of local thickness. Then, when rendering subsurface scattering, one minus the local thickness can be multiplied with the negated offset NdotL value to obtain an approximation for the intensity of subsurface scattering effects. Example code for applying the thickness map is as follows:

float nDotL = dot(N, L);
vec3 shadowedIllumination = directIllumination * shadow;
float subsurfaceAmount = 1.0 - localThickness;
vec3 surfaceLighting = shadowedIllumination * max(nDotL, 0.0);
// region 2: subsurface contribution from adjacent, same-facing points
surfaceLighting += shadowedIllumination * subsurfaceColor * clamp(abs(nDotL) + NDOTL_OFFSET, 0.0, 1.0);
// region 1: contribution from opposite-facing faces; NdotL is negated per the description above
surfaceLighting += directIllumination * subsurfaceColor * subsurfaceAmount * -min(nDotL, 0.0);

[0021] As illustrated in the example code above, the thickness map supplies a subsurface amount (one minus the local thickness) that controls the intensity of the effect. This subsurface amount can be multiplied by the negated NdotL value to determine the subsurface contribution to the surface lighting at a particular point.
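
The offline thickness-map bake described in paragraph [0020] could be sketched as follows; castRay and cosineSampleHemisphere are hypothetical helpers, not an actual API, and the loop structure is an assumption.

// Hypothetical offline bake of interior ambient occlusion as local thickness.
// castRay() is assumed to return the distance to the nearest hit, or maxLength on a miss.
float bakeLocalThickness(vec3 position, vec3 geometricNormal, int numRays, float maxLength)
{
    float occlusion = 0.0;
    for (int i = 0; i < numRays; ++i) {
        // Cosine-weighted hemisphere sampling perturbs the negated normal so
        // rays are cast into the interior of the mesh.
        vec3 dir = cosineSampleHemisphere(-geometricNormal);
        // Occlusion is proportional to hit distance; a miss contributes the max value of 1.
        occlusion += castRay(position, dir, maxLength) / maxLength;
    }
    // Smaller values in more occluded (thinner) areas; usable directly as local thickness.
    return occlusion / float(numRays);
}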

[0022] At block 204, the system can determine shadowed illumination. Shadowed illumination is the contribution of direct illumination of the shaded point from a given light, attenuated by shadowing from objects between the shaded point and the light. This contribution is already calculated at each hit point in the normal course of ray tracing, so the system simply reuses that value. In order to calculate shadowed direct illumination, direct illumination is first calculated using a product of the BSDF (which typically includes an NdotL term) with the light radiance and divided by the sampling probability distribution function used to select the light. Next, a shadow ray is cast in the direction of the light, and if that ray encounters any objects, the direct illumination is attenuated based on the global shadow mode and the material properties of the shadowing objects.
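
A sketch of the shadowed-illumination computation described above follows, assuming hypothetical evaluateBSDF, lightRadiance, and traceShadowRay helpers; none of these names come from the disclosure itself.

// Illustrative only; the helper functions are assumptions, not a real API.
vec3 shadowedIllumination(vec3 P, vec3 N, vec3 L, float lightPdf)
{
    // Direct illumination: BSDF (which typically includes the NdotL term)
    // times light radiance, divided by the light-sampling PDF.
    vec3 direct = evaluateBSDF(N, L) * lightRadiance(P, L) / lightPdf;

    // A shadow ray toward the light; intersections attenuate the contribution
    // based on the global shadow mode and the occluders' material properties.
    float shadow = traceShadowRay(P, L);
    return direct * shadow;
}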

[0023] At block 206, the system can apply a fixed offset to the NdotL value used for computing direct illumination. This reduces attenuation due to surface curvature by a fixed amount, which effectively emulates the subsurface scattering due to direct illumination from adjacent surface points facing in the same direction as the shaded point. A fully ray traced subsurface scattering implementation achieves this by randomizing the origins of ray casts in an area around the shaded point, casting rays to find corresponding surface points, and evaluating direct illumination at each intersection. This method saves subsurface ray casts, additional direct illumination evaluations, and also the shadow ray casts used to determine shadowing for the neighboring points.
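
In shader terms, block 206 can amount to a single offset-and-clamp; NDOTL_OFFSET is the fixed constant used in the listings herein, and this line is a sketch rather than the actual implementation.

// The fixed offset keeps grazing, same-facing neighbors partially lit,
// emulating their subsurface contribution without extra ray or shadow casts.
float offsetNdotL = clamp(nDotL + NDOTL_OFFSET, 0.0, 1.0);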

[0024] At block 208, the system uses the thickness map to approximate the subsurface contribution from nearby faces that are oriented in the opposite direction from the point being shaded. Since these faces have normals oriented opposite the shaded normal, the offset NdotL term will still be fully attenuated to 0. Simply negating the offset NdotL value would produce unwanted subsurface scattering on the back faces of thick regions of the object, such as the back face of a sphere. However, multiplying the negated NdotL value with one minus the thickness map limits the subsurface contribution to regions of the object with local thicknesses small enough for subsurface scattering to realistically impact appearance. This method differs from fully featured subsurface scattering in two ways. First, by loading local thickness from a texture, this method avoids casting randomly selected rays to estimate local thickness. Second, by reusing the offset NdotL value and the radiance at the shaded point, this method skips evaluating direct illumination separately for neighboring points. Finally, this method supplements the adjacent-face subsurface scattering from the original NdotL offsetting with opposite-face subsurface scattering, cheaply approximating the appearance of subsurface scattering across the entire surface of a thick object.

[0025] Pipeline 220 can be applied for thin objects as defined above. Pipeline 220 applies similar steps as pipeline 200, with some distinctions. At block 204, the system can determine shadowed illumination as described above. At block 206, the system can apply the same fixed offset to the NdotL value used for attenuation. However, instead of calculating the same surface lighting as with thick objects, pipeline 220 applies block 210 to determine "translucent color". Implementations may reuse the emission material property as subsurface color for materials with subsurface intensity greater than 0. Foliage assets are commonly provided with subsurface maps in addition to albedo maps, and these maps may be applied via the existing emission map property. In order to properly transmit illumination from front faces to back faces, the absolute value of NdotL is used to attenuate the subsurface color instead of the existing NdotL value, which clamps negative values to 0. Example code is illustrated below; the shadow term is the same shadow term calculated via shadow rays during normal direct illumination computations.

float nDotL = dot(N, L);
vec3 shadowedIllumination = directIllumination * shadow;
vec3 color = shadowedIllumination * max(nDotL, 0.0);
// translucent color: abs(nDotL) transmits illumination from front faces to back faces
color += subsurfaceEmission * subsurfaceIntensity * shadowedIllumination * clamp(abs(nDotL) + NDOTL_OFFSET, 0.0, 1.0);

[0026] As illustrated above, the translucent color code incorporates similar terms as determined in pipeline 200, but applies the NdotL term differently to better reflect the translucency of thinner objects. At block 212, either thin- or thick-object subsurface scattering is added to the shaded color in order to complete rendering of the object.
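
A one-line sketch of block 212, with illustrative variable names following the listings above:

// Block 212: the computed subsurface term is additively applied to the
// conventionally shaded color to complete the render.
vec3 finalColor = shadedColor + subsurfaceContribution;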

[0027] FIGs. 3A and 3B illustrate the application of subsurface scattering effects to a thick object as outlined in pipeline 200. In FIG. 3A, thickness map 302 is overlaid on the object to demonstrate the relative thickness for a particular point 300. Originating from point 300 are vectors N and L. As described above, N represents the vector perpendicular to the surface at point 300. L represents the vector directed towards the light source 304. FIG. 3B illustrates the resulting appearance. Subsurface scattering from adjacent points contributes to faces 306 and 308, as the object would receive NdotL-attenuated lighting from light source 304 at an adjacent surface point. Subsurface scattering from opposite-face points can be applied to face 310, as face 310 faces away from light source 304 and so NdotL is negative across the entire face. FIGs. 3A and 3B are illustrated using three-dimensional ellipsoid cylinders; however, whether subsurface scattering from adjacent versus opposite faces dominates depends on the overall shape of the object and the orientation of the object relative to the light source.

[0028] FIG. 4 illustrates the application of subsurface scattering effects to a thin object as outlined in pipeline 220. Here, the relative point on the object's surface is point 400, which involves similar N and L vectors directed towards light source 402. Point 400 receives both direct illumination and a subsurface contribution approximated by the product of direct illumination with subsurface color. Object face 404 does not receive direct illumination, since it is on the back face of the object. However, a subsurface contribution is still applied by computing the direct illumination as if it was on the face of the object pointed towards the light, and then multiplying that value by the subsurface color. The coloring for face 404 can be determined based on the calculations described in pipeline 220.

[0029] FIG. 5 depicts a block diagram of an example computer system 500 in which various of the embodiments described herein may be implemented. The computer system 500 includes a bus 502 or other communication mechanism for communicating information, one or more hardware processors 504 coupled with bus 502 for processing information. Hardware processor(s) 504 may be, for example, one or more general purpose microprocessors.

[0030] The computer system 500 also includes a main memory 506, such as a random access memory (RAM), cache and/or other dynamic storage devices, coupled to bus 502 for storing information and instructions to be executed by processor 504. Main memory 506 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 504. Such instructions, when stored in storage media accessible to processor 504, render computer system 500 into a special-purpose machine that is customized to perform the operations specified in the instructions.

[0031] The computer system 500 further includes a read only memory (ROM) 508 or other static storage device coupled to bus 502 for storing static information and instructions for processor 504. A storage device 510, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to bus 502 for storing information and instructions.

[0032] The computer system 500 may be coupled via bus 502 to a display 512, such as a liquid crystal display (LCD) (or touch screen), for displaying information to a computer user. An input device 514, including alphanumeric and other keys, is coupled to bus 502 for communicating information and command selections to processor 504. Another type of user input device is cursor control 516, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 504 and for controlling cursor movement on display 512. In some embodiments, the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor.

[0033] The computing system 500 may include a user interface module to implement a GUI that may be stored in a mass storage device as executable software codes that are executed by the computing device(s). This and other modules may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.

[0034] In general, the words "component," "engine," "system," "database," "data store," and the like, as used herein, can refer to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, C or C++. A software component may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software components may be callable from other components or from themselves, and/or may be invoked in response to detected events or interrupts. Software components configured for execution on computing devices may be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution). Such software code may be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware components may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors.

[0035] The computer system 500 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 500 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 500 in response to processor(s) 504 executing one or more sequences of one or more instructions contained in main memory 506. Such instructions may be read into main memory 506 from another storage medium, such as storage device 510. Execution of the sequences of instructions contained in main memory 506 causes processor(s) 504 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.

[0036] The term "non-transitory media," and similar terms, as used herein refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 510. Volatile media includes dynamic memory, such as main memory 506. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same.

[0037] Non-transitory media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between non-transitory media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 502. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.

[0038] The computer system 500 also includes a communication interface 518 coupled to bus 502. Communication interface 518 provides a two-way data communication coupling to one or more network links that are connected to one or more local networks. For example, communication interface 518 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 518 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN). Wireless links may also be implemented. In any such implementation, communication interface 518 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.

[0039] A network link typically provides data communication through one or more networks to other data devices. For example, a network link may provide a connection through local network to a host computer or to data equipment operated by an Internet Service Provider (ISP). The ISP in turn provides data communication services through the world wide packet data communication network now commonly referred to as the "Internet." Local network and Internet both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link and through communication interface 518, which carry the digital data to and from computer system 500, are example forms of transmission media.

[0040] The computer system 500 can send messages and receive data, including program code, through the network(s), network link and communication interface 518. In the Internet example, a server might transmit a requested code for an application program through the Internet, the ISP, the local network and the communication interface 518.

[0041] The received code may be executed by processor 504 as it is received, and/or stored in storage device 510, or other non-volatile storage for later execution.

[0042] Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code components executed by one or more computer systems or computer processors comprising computer hardware. The one or more computer systems or computer processors may also operate to support performance of the relevant operations in a "cloud computing" environment or as a "software as a service" (SaaS). The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The various features and processes described above may be used independently of one another, or may be combined in various ways. Different combinations and sub-combinations are intended to fall within the scope of this disclosure, and certain method or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate, or may be performed in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The performance of certain of the operations or processes may be distributed among computer systems or computer processors, not only residing within a single machine, but deployed across a number of machines.

[0043] As used herein, a circuit might be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a circuit. In implementation, the various circuits described herein might be implemented as discrete circuits or the functions and features described can be shared in part or in total among one or more circuits. Even though various features or elements of functionality may be individually described or claimed as separate circuits, these features and functionality can be shared among one or more common circuits, and such description shall not require or imply that separate circuits are required to implement such features or functionality. Where a circuit is implemented in whole or in part using software, such software can be implemented to operate with a computing or processing system capable of carrying out the functionality described with respect thereto, such as computer system 500.

[0044] As used herein, the term "or" may be construed in either an inclusive or exclusive sense. Moreover, the description of resources, operations, or structures in the singular shall not be read to exclude the plural. Conditional language, such as, among others, "can," "could," "might," or "may," unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps.

[0045] Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. Adjectives such as "conventional," "traditional," "normal," "standard," "known," and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. The presence of broadening words and phrases such as "one or more," "at least," "but not limited to" or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent.