Title:
VOLUMETRIC TRANSPARENCY AND SHADOWS FOR MOBILE GRAPHICS
Document Type and Number:
WIPO Patent Application WO/2023/132971
Kind Code:
A1
Abstract:
Systems and methods are provided for determining an object's volumetric transparency and transparent shadows. The system can determine the object's volumetric transparency and transparent shadows using the object's attenuation. The object's front face attenuation can be determined using the object's roughness. The object's back face attenuation can be determined using the object's opacity and the distance between the back face intersection and the front face intersection. The object can be rendered in a graphical interface with volumetric transparency and/or transparent shadows.

Inventors:
NING PAULA SIHONG (US)
LI CHEN (US)
LI MINHAO (US)
SUN HONGYU (US)
Application Number:
PCT/US2022/054384
Publication Date:
July 13, 2023
Filing Date:
December 30, 2022
Assignee:
INNOPEAK TECH INC (US)
International Classes:
G06T15/08; G06T15/60; G06T15/06
Foreign References:
CN108564646A (2018-09-21)
US20200051312A1 (2020-02-13)
JP2022519505A (2022-03-24)
EP3406076B1 (2022-08-10)
Other References:
BARTA PÁL, KOVÁCS BALÁZS, SZÉCSI LÁSZLÓ, SZIRMAY-KALOS LÁSZLÓ: "Order Independent Transparency with Per-Pixel Linked Lists", PROCEEDINGS OF CESCG 2011: THE 15TH CENTRAL EUROPEAN SEMINAR ON COMPUTER GRAPHICS, 1 January 2011 (2011-01-01), XP093078912
Attorney, Agent or Firm:
AGDEPPA, Hector, A. (US)
Claims:

What is claimed is:

1. A method for determining volumetric transparency comprising:
determining an object's front face attenuation using a roughness parameter associated with the object;
determining an object's back face attenuation using an opacity parameter associated with the object;
computing backscattering of light at front and back faces of the object using the front face attenuation and the back face attenuation;
defining a spectral transmission parameter for the object using the back face attenuation; and
rendering the object in a graphical interface by applying the backscattering of light and the spectral transmission parameter.

2. The method of claim 1, wherein the object's front face attenuation is determined based on a minimum thickness parameter associated with the object.

3. The method of claim 1, wherein the object's back face attenuation is determined based on a minimum ray length parameter and roughness parameter associated with the object.

4. The method of claim 3, wherein the distance between a ray's entry point and the ray's exit point on the object is used to approximate volumetric absorption for the object.

5. The method of claim 1, wherein computing the backscattering of light is based on an albedo parameter used as an approximation of color for the backscattering of light.

6. The method of claim 5, wherein the spectral transmission parameter is based on a subtractive inverse of the albedo parameter.

7. The method of claim 5, wherein transmitted radiance of light through the object is proportional to a product of a subtractive inverse of the albedo and the back face attenuation.

8. A mobile device comprising:
a processor; and
a memory encoded with instructions which, when executed, cause the processor to:
determine an object's front face attenuation using a roughness parameter associated with the object;
determine an object's back face attenuation using an opacity parameter associated with the object;
define a transparency of shadows parameter for the object using a total attenuation parameter associated with the object and a number of hits a shadow ray of the object can register, wherein the total attenuation parameter comprises the object's front face attenuation and the object's back face attenuation; and
render the object in a graphical interface by applying the transparency of shadows parameter.

9. The mobile device of claim 8, wherein the object's front face attenuation is determined based on a minimum thickness parameter associated with the object.

10. The mobile device of claim 8, wherein the object's back face attenuation is determined based on a ray length parameter associated with the object.

11. The mobile device of claim 10, wherein the ray length parameter is approximated as a distance between a ray's entry point and the ray's exit point.

12. The mobile device of claim 8, wherein the object's front face attenuation and the object's back face attenuation are determined using Fresnel reflectance.

13. The mobile device of claim 12, wherein the object's front face attenuation and the object's back face attenuation is approximated using an absolute value of a dot product of a surface normal of the object with a shadow ray direction for the object.

14. The mobile device of claim 8, wherein the number of hits a shadow ray of the object can register is set based on user input.

15. A user interface for a mobile device comprising a non-transitory machine-readable medium encoded with instructions which, when executed by a processor, cause the processor to:
determine an object's front face attenuation using a roughness parameter associated with the object;
determine an object's back face attenuation using an opacity parameter associated with the object;
define volumetric transparency by defining a backscattering of light parameter for the object using a total attenuation parameter associated with the object and a light absorption parameter associated with the object, and defining a spectral transmission parameter for the object using the total attenuation parameter and the backscattering of light parameter, wherein the total attenuation parameter comprises the object's front face attenuation and the object's back face attenuation;
define transparent shadows for the object using the total attenuation parameter and a number of hits a shadow ray of the object can register; and
render the object in a graphical interface by applying the volumetric transparency and transparent shadows.

16. The user interface of claim 15, wherein the object's front face attenuation is determined based on a minimum thickness parameter associated with the object.

17. The user interface of claim 15, wherein the object's back face attenuation is determined based on an opacity parameter associated with the object.

18. The user interface of claim 17, wherein the ray length parameter is approximated as a distance between a ray's entry point and the ray's exit point.

19. The user interface of claim 15, wherein the object's front face attenuation and the object's back face attenuation are determined using Fresnel reflectance.

20. The user interface of claim 15, wherein all parameters are generated from user input.

Description:
VOLUMETRIC TRANSPARENCY AND SHADOWS FOR MOBILE GRAPHICS

Cross-Reference to Related Applications

[0001] This application claims the benefit of U.S. Provisional Application No. 63/422,246, filed November 3, 2022 and titled "REAL-TIME RAY TRACED TRANSLUCENCY AND TRANSPARENT SHADOWS FOR MOBILE," which is hereby incorporated herein by reference in its entirety. This application is also related to the co-pending PCT application having Attorney Docket Number 75EP-368078-WO and titled "SUBSURFACE SCATTERING FOR MOBILE APPLICATIONS".

Technical Field

[0002] The disclosed technology generally relates to methods of rendering digital images at a user device. Particularly, the disclosed technology includes methods and systems for rendering lighting and shadow effects on objects of various translucency.

Background

[0003] Image rendering quality, including more realistic simulations of real-world objects, higher resolution, and smoother framerates, is a constant goal for several technologies and applications. Trends in real-time gaming applications toward processor-heavy graphical manipulation techniques such as path tracing have put pressure on innovators in this space to develop new methods that provide the same or similar visual effects with less demand on hardware components, while allowing users of real-time gaming applications increased flexibility in tailoring the experience to their personal preferences.

Brief Description of the Drawings

[0004] The present disclosure, in accordance with one or more various embodiments, is described in detail with reference to the following figures. The figures are provided for purposes of illustration only and merely depict typical or example embodiments.

[0005] FIG. 1 illustrates one embodiment of a user experience rendering system in accordance with some examples of the disclosure.

[0006] FIG. 2 illustrates example pipelines for rendering volumetric transparency and transparent shadows.

[0007] FIG. 3 illustrates the aspects of light hitting an object as applied to volumetric transparency.

[0008] FIG. 4 illustrates the aspects of light hitting an object with resulting shadows as applied to volumetric shadowing.

[0009] FIG. 5 illustrates an example user interface for setting an object's volumetric transparency and transparent shadows.

[0010] FIG. 6 depicts a block diagram of an example computer system in which various of the embodiments described herein may be implemented.

[0011] The figures are not exhaustive and do not limit the present disclosure to the precise form disclosed.

Detailed Description

[0012] Graphics rendering for video games continues to advance, especially in the realm of mobile devices and games. As mobile devices do not have as much processing power as desktop personal computers or game consoles, it is important to balance the quality of graphics for mobile applications with the processing requirements. Real-time application rendering is most commonly achieved through rasterization of triangular geometry. Triangular geometry is defined as a list of positions given for the vertices of each triangle within a given object. Each copy of each object has an independent matrix which is used to place and orient that particular copy in a common coordinate space known as world space. An additional matrix associated with a camera is used to convert world space coordinates to view space coordinates, and finally a projection matrix is used to project those 3D positions onto a 2D screen plane. The transformation and projection steps are typically performed in a stage known as a "vertex shader," in which a shader program is run for every vertex being processed. The rasterization pipeline is named for the hardware-accelerated rasterization step that occurs after the vertex shader, which interpolates all data defined per-vertex across all pixels within the triangles being rendered, thus converting per-vertex data to per-pixel data. Next, a "pixel shader" is executed for all pixels thus interpolated. While many rasterization pipelines have additional stages, the vertex-raster-pixel pipeline is the core of all rasterization pipelines. Volumetric transparency and shadowing are advanced graphics effects that are not ubiquitous even in desktop and console applications. Real-time transparency in rasterization pipelines is most commonly achieved through a method known as alpha blending: that is, objects have either a fixed or texture-mapped alpha value which is used to attenuate the output of the pixel shader for simple blending with the background color. This method achieves a flat, less realistic appearance for transparent objects and requires separate sorting and rendering of transparent objects in order to correctly render their overlaps with other objects. Ray tracing, an alternative method of rendering that computes the intersection of rays with objects, does not require separate rendering of transparent objects and facilitates improved transparent rendering methods. Since ray tracing is much more expensive than rasterization, it is ideal to apply it only to problems that are poorly solved by rasterization, such as rendering transparent objects.
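For context only, the conventional alpha blending described above can be sketched in a few lines; the type and function names below are illustrative and not part of the disclosure.

```cpp
// Minimal sketch of conventional alpha blending, for contrast with the
// ray-traced approach described below; all names are illustrative.
struct Color { float r, g, b; };

// Blend the pixel shader's output over the background using an alpha
// value in [0, 1], where 1 is fully opaque.
Color alphaBlend(const Color& src, const Color& dst, float alpha) {
    return { alpha * src.r + (1.0f - alpha) * dst.r,
             alpha * src.g + (1.0f - alpha) * dst.g,
             alpha * src.b + (1.0f - alpha) * dst.b };
}
```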

[0013] Examples of the disclosed technology assist in rendering volumetric transparency and shadows during rasterization by applying the Beer-Lambert law as a visual approximation. The embodiments described herein involve three-dimensional models of objects as they are generated in a graphics environment. The generated three-dimensional model can involve the appearance of light and shadow interactions with the model based on a theoretical light source. On the front face of the object, light from the source can hit the object and scatter over one or more bounces towards the viewing camera. A representation of this phenomenon can be generated based on properties of the object. On the interior back face of the object, light from the source can be attenuated based on the travel distance from the front face of the object to the back face. A representation of this phenomenon can also be generated based on properties of the object. The end result is a three-dimensional model that appears to interact with light in a three-dimensional manner. Embodiments use the Beer-Lambert law to take advantage of a generated model's physical properties so as to avoid tracing a ray of light incrementally as it reaches and passes through an object. The Beer-Lambert law relates the attenuation of light to the properties of the material through which it travels. The law is represented as

A = ε * ℓ * c

where A is the attenuation (absorbance) of light, ε is the absorptivity of the object, ℓ is the optical path length of the light, and c is the concentration of the material. This attenuation can be applied to relate the transmitted spectral radiant power to the incident spectral radiant power and absorption path:

P_t = P_i * 10^(-A)

where P_t is the transmitted spectral radiant power and P_i is the incident spectral radiant power. Embodiments of the disclosed technology can reformulate the Beer-Lambert law for ray tracing by assuming that attenuation is proportional to the ratio of the transmitted to incident power, such that full attenuation has a value of 0 and no attenuation has a value of 1. The product of the absorption coefficient ε with the concentration c can be substituted with a material parameter that is inversely proportional to attenuation: roughness for front-face hits, and opacity for back-face hits. The path length ℓ can be substituted with a minimum object thickness at front-face ray hits, and the ray length at back-face ray hits.

Attenuation can then be approximated as

Front Face Attenuation = 10^-(roughness * thin_object_thickness)

Back Face Attenuation = 10^-(opacity * ray_length)
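As a rough sketch, these two approximations can be expressed directly in code; the function names and signatures below are illustrative assumptions, not part of the disclosure.

```cpp
#include <cmath>

// Front-face attenuation approximates multiscattering at the boundary,
// using the user-tuned minimum thickness as a constant path length.
float frontFaceAttenuation(float roughness, float thinObjectThickness) {
    return std::pow(10.0f, -(roughness * thinObjectThickness));
}

// Back-face attenuation approximates volumetric absorption, using the
// ray length between the front and back face intersections.
float backFaceAttenuation(float opacity, float rayLength) {
    return std::pow(10.0f, -(opacity * rayLength));
}
```

In both functions a result of 1 means no attenuation and a result of 0 means full attenuation, matching the reformulation above.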

[0014] Roughness refers to the texture of the surface that results in irregularities such as bumps, divots, etc. This is a critical property in the context of graphics rendering, as roughness dictates how light scatters on and passes through the surface of an object, particularly when considering multiscattering effects. Multiscattering describes interactions with the surface where light bounces more than once across the microfacets of an intersection point before casting towards the camera. Roughness can be considered for the front face of an object (i.e., the surface where the light first hits the object) to approximate multiscattering. Multiscattering can be approximated as an attenuation to the single-scattering intensity proportional to roughness. Since multiscattering events only occur at the boundaries of a transparent geometry, they should not be proportional to path length, so they can instead be approximated with a constant path length. For simplicity and performance, multiscattering at the back face is ignored. Instead, at the back face of the object, where the light passes through and leaves the object, opacity can be treated as an abstraction of the path-length-proportional absorptive properties of the material. Since the index of refraction affects the path length from the front to the back face, considering path length alone is sufficient to provide the rendered image with a more three-dimensional look. Similar to multiscattering, backscattering at the back face is a measurement of the reflection of light radiance towards the camera. The backscattering component can be approximated from attenuation using fewer computations than a physically-based volumetric scattering model as

backscattering = (1 - attenuation) * albedo

[0015] where albedo is a description of the object's backscattering color. Spectral transmission through an object (such as described by Mie scattering or the Tyndall effect) may be approximated as follows:

spectral transmission = lerp(1 - albedo^3, albedo^3, attenuation) * attenuation

[0016] The spectral transmission can be calculated such that the remaining transmitted spectrum is made up of wavelengths that were not backscattered. As a ray is further attenuated, the transmitted color can be linearly interpolated away from the initial albedo and towards the subtractive inverse of the albedo. Multiplication with the total attenuation can ensure energy conservation between the color transmitted and the color backscattered.
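A hedged sketch of the backscattering and spectral transmission formulas, applied per color channel, follows; the per-channel cubing of the albedo tracks the expression above, while the surrounding types and names are our own illustrative assumptions.

```cpp
struct Color3 { float r, g, b; };

static float lerp(float a, float b, float t) { return a + (b - a) * t; }
static float cube(float x) { return x * x * x; }

// backscattering = (1 - attenuation) * albedo
Color3 backscatter(const Color3& albedo, float attenuation) {
    float k = 1.0f - attenuation;
    return { k * albedo.r, k * albedo.g, k * albedo.b };
}

// spectral transmission = lerp(1 - albedo^3, albedo^3, attenuation) * attenuation.
// As attenuation falls toward 0 the transmitted color moves toward the
// subtractive inverse of the albedo, and the final multiplication by
// attenuation conserves energy against the backscattered color.
Color3 spectralTransmission(const Color3& albedo, float attenuation) {
    auto channel = [&](float a) {
        return lerp(1.0f - cube(a), cube(a), attenuation) * attenuation;
    };
    return { channel(albedo.r), channel(albedo.g), channel(albedo.b) };
}
```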

[0017] In terms of shadows, the approximations are similar, with the exception that they are applied to shadow rays instead of path rays and omit the backscattering component. Physically correct simulation of transparent shadows requires refracting shadow rays through transparent objects in order to obtain correct refractive detail in the transparent shadows. However, we observe that even when maintaining the original shadow ray direction, applying path-length-proportional attenuation already significantly improves the appearance of transparent shadows. This is significant for inline ray tracing, where a new inline ray tracing query must be initialized whenever the ray direction is changed. Alternatively, for shadows, attenuation can be approximated using Fresnel attenuation. The Fresnel term describes the reflection and transmission of light when incident on an interface between different optical media. Fresnel attenuation uses the Fresnel term to add geometric surface detail to transparent shadows. Embodiments of the disclosed technology can apply the above techniques to make objects appear more detailed while preserving computation time in a user rendering system.
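The dot-product shortcut for Fresnel attenuation of shadow rays might be sketched as follows; the vector type is ours, and both inputs are assumed to be unit length.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Visual shortcut for the full Fresnel term at a shadow-ray hit: modulate
// the shadow by |N . L|, where N is the unit surface normal and L the unit
// shadow ray direction. Grazing hits (small |N . L|) attenuate transmitted
// light more, darkening the shadow and adding geometric surface detail.
float fresnelShadowAttenuation(const Vec3& normal, const Vec3& shadowDir) {
    return std::fabs(dot(normal, shadowDir));
}
```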

[0018] FIG. 1 illustrates one embodiment of a user rendering system 100 in accordance with some examples of the disclosure. The user rendering system 100 comprises at least machine readable media 102, processor 104, input interface 106, display interface 108, and communication interface 110. Machine readable media 102 may comprise any form of information storage (e.g., random-access memory ("RAM"), read only memory ("ROM"), flash drives, processor caches), and covers both static and dynamic storage as well as long-term and short-term storage. Some of the information stored on machine readable media 102 may be categorized as rendering modules 112 and/or program data 114. Rendering modules 112 may include operating system component 116, application program component 118, and graphical application component 120 (described further below). Program data 114 may refer to any collection of data input to and/or output by the processor 104 when executing rendering modules 112. Program data 114 may include operating system data 122, application data 124, and graphical application data 126.

[0019] Processor 104 may refer to one or more general purpose processors (e.g., microprocessors) and/or one or more special purpose processors (e.g., GPUs, network processors, or ASICs). Further, in embodiments where multiple processors are represented by processor 104, said processors may be operated in parallel so that multiple instruction sets may be simultaneously executed by processor 104. Input device 106 may refer to any device with which a user may interact (e.g., a smart phone or other touch screen device), which converts such interactions into signals interpretable by processor 104. Display device 108 may refer to any device which may output a visual experience to a user (e.g., a smartphone screen, a liquid crystal display ("LCD"), a light emitting diode ("LED") display). FIG. 1 depicts display device 108 as a mobile touch screen; however, other embodiments of user experience rendering system 100 may be implemented on a different platform which lends itself to other types of display devices. In addition, display device 108 is depicted here as relating to only one display device; however, display device 108 may also refer to one or more additional display devices (e.g., LEDs, attached LCD screens) that may be operated simultaneously with the mobile device's screen. Communication interface 110 may refer to one or more devices that allow processor 104 to communicate with components not located locally with processor 104, and/or one or more devices that allow for instructions and/or data to be sent from machine readable media 102 over a network. Communication interface 110 may include a modem or soft modem, a network interface (such as an Ethernet, network interface card, WiMedia, IEEE 802.XX or other interface), a communications port (such as, for example, a USB port, IR port, RS232 port, Bluetooth® interface, or other port), or other communications interface.

[0020] Referring now to executable rendering module 112 and program data 114 present on machine readable media 102, operating system 116 manages the various hardware and software components of the system and provides common interfacing services. Operating system 116 may include any known operating system available or may be custom written for the system. Application program 118 may include one or more software programs meant to perform a function aside from rendering a user experience (e.g., email, phone, internet browser). Application data 124 may refer to data related to the functions performed by application program 118 (e.g., email address book, phone number contact list, bookmarked webpage list).

[0021] Graphical application 120 may include any software program meant to output one or more rendered images. When executing instructions relating to graphical application 120, processor 104 may read input from and/or output to graphical application data 126. Graphical application data 126 may refer to data related to the content visually displayed to a user (e.g., simulated character settings, login account profile, simulated environment objects).

[0022] Graphical application 120 can comprise volumetric transparency module 128 for rendering volumetric transparency. Volumetric transparency refers to the following physical effects: 1) light attenuation as it propagates through a volume, 2) backscattering of light as it propagates through a volume, 3) multiscattering at geometric boundaries, and 4) spectral absorption and transmission of light colors. Volumetric transparency module 128 can apply the techniques introduced above to render backscattering and multiscattering of light as it hits three-dimensional objects. Similarly, volumetric transparent shadow module 130 can be applied to render three-dimensional shadows in a mobile application. As described further below, volumetric transparent shadow module 130 can apply a pipeline similar to that of volumetric transparency module 128 to apply attenuation to shadow rays. Thick object subsurface scattering module 132 and thin object subsurface scattering module 134 can be used to render subsurface scattering of translucent objects that are thick and thin, respectively. Modules 132 and 134 are further described in Attorney Docket No. 75EP-368078, the contents of which are incorporated herein in their entirety.

[0023] FIG. 2 illustrates an example pipeline 200 that can be applied to render a three-dimensional object using volumetric transparency and/or shadows. At block 202, the system can approximate attenuation. As mentioned above, attenuation can be illustrated as

A = ε * ℓ * c

where A is the attenuation (absorbance) of light, ε is the absorptivity of the object, ℓ is the optical path length of the light, and c is the concentration of the material. The system can substitute the product of the absorption coefficient ε with the concentration c with roughness for front-face hits, and opacity for back-face hits. Attenuation can then be approximated as

Front Face Attenuation = 10^-(roughness * thin_object_thickness)

Back Face Attenuation = 10^-(opacity * ray_length)

[0024] Thin_object_thickness is a non-physical, user-tuned parameter. This thickness is used as a minimum thickness of all transparent objects, and is most intuitively used to describe the thickness of a pane of glass represented using a single plane instead of a thin cube. The ray length is the travel distance of the ray between the front face intersection and the back face intersection, and only applies to 3D meshes (such as the thin-cube representation of a windowpane). At block 204, the object's roughness can be used to approximate the front face attenuation from multiscattering effects, while at block 206, the object's opacity can be used to approximate back face attenuation from volumetric extinction. Using these two material properties, the attenuation can be approximated in only two ray intersections and without computationally intensive simulation of physical phenomena. Once attenuation is determined at each face, the system can use that attenuation directly to compute transparent shadows or straightforwardly compute backscattering from attenuation for rendering of the object itself.

[0025] Once attenuation has been computed, the system can compute backscattering at block 208. As mentioned above, roughness is considered for front-face hits to approximate multiscattering on rough surfaces. Multiscattering can be approximated as an attenuation to the single-scattering intensity proportional to roughness. At the back face, opacity can comprise an abstraction of the path-length-proportional absorptive properties of the material. As a result, the backscattering component can be approximated using fewer computations as

backscattering = (1 - attenuation) * albedo

where albedo is a description of the object's color. Albedo is a common material property that technically refers specifically to the surface color of an object under white light at an intensity of 1. In practice, its meaning has expanded to refer to the dominant color of an object under white light. In this context, albedo is a statement of artist intent, and thus transparent objects are rendered such that their color most closely matches the object's albedo. For smooth glass objects, this means attenuating refracted indirect illumination with the object's albedo. To simulate spectral absorption and transmission effects, the albedo is treated as the backscattering color, with the reasoning that when configuring the Tyndall effect in opalite, artists would expect to set a light blue color to express the light blue color of the glass, while the code infers the subtractive transmitted color implicitly. At block 210, spectral transmission can be computed. Spectral transmission refers to the translucence of an object and how light passes through the object. Spectral transmission can be computed as

spectral transmission = lerp(1 - albedo^3, albedo^3, attenuation) * attenuation

This computation assumes that the remaining transmitted spectrum is made up of wavelengths that were not backscattered. As a ray is further attenuated, the transmitted color is linearly interpolated away from the initial albedo and towards the subtractive inverse of the albedo. At block 212, this computation can be repeated up to a maximum specular/diffuse number of hits, which limits the number of rays that can be cast along a path associated with a given pixel.

[0026] When casting shadow rays, the system can calculate the transparent shadows at block 214 using the same methods for calculating attenuation as are used when rendering the objects directly. At block 216, as an alternative, Fresnel attenuation can approximate attenuation due to Fresnel reflectance reflecting away some of the incident light at each hit. While the full Fresnel term may be calculated, for shadowing it is visually sufficient to attenuate the shadow with the absolute value of the dot product of the surface normal and the shadow ray direction. Due to the performance criticality of shadow rays, most of the transparency settings can be duplicated for transparent shadows, so that they may be configured separately from transparent rendering to achieve the tradeoff between appearance and performance best suited to the user's application. At block 218, a Max Hits parameter can be set by the user or automatically by the system. The Max Hits setting can limit the number of hits a shadow ray can register, so that regions with many overlapping transparent objects avoid exceeding a maximum cost. At block 220, the three-dimensional object can be rendered with the computed backscattering, spectral transmission, and/or transparent shadows.
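To make the interplay of per-hit attenuation and the Max Hits cap concrete, here is a hypothetical accumulation loop for a single unrefracted shadow ray; ShadowHit and the calling convention are assumptions, not the patent's API.

```cpp
#include <cmath>
#include <vector>

// One registered shadow-ray hit: the material opacity at the hit and the
// interior path length of the segment it attenuates (illustrative fields).
struct ShadowHit { float opacity; float segmentLength; };

// Accumulate Beer-Lambert-style attenuation along the shadow ray while
// keeping its original direction. Returns 1 for an unshadowed ray and
// approaches 0 as the shadow darkens.
float shadowTransmission(const std::vector<ShadowHit>& hits, int maxHits) {
    float transmission = 1.0f;
    int count = 0;
    for (const ShadowHit& hit : hits) {
        if (count++ >= maxHits) break;  // Max Hits bounds cost where many
                                        // transparent objects overlap
        transmission *= std::pow(10.0f, -(hit.opacity * hit.segmentLength));
    }
    return transmission;
}
```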

[0027] FIG. 3 illustrates volumetric transparency as applied to a general three-dimensional object. As illustrated, light rays 301 and 302 from light source 300 can strike an object at each intersection along a path in order to directly illuminate the surface at each intersection. As illustrated at the end of light ray 301, the light can illuminate each intersection through both single-scattering and multiscattering events, as illustrated at 304. BSDF-based estimates of direct illumination account for only the contribution of single-scattering events. The contribution of multiscattering events is approximated in this system based on roughness and a configurable minimum thickness. This is much cheaper than the offline method of casting rays to directly simulate multiscattering events within a microsurface and evaluating the BSDF at each bounce. Next, the transmitted ray is refracted through the transparent object. At the back-face intersection, attenuation due to volumetric absorption is approximated from interior path length 305.

[0028] FIG. 4 illustrates volumetric transparent shadows as applied to a general three-dimensional object. As illustrated, a shadow ray path directed towards light source 400 is not refracted, but otherwise the attenuation events along the path are calculated in the same fashion as the attenuation events along light paths, illustrated at path 402. FIG. 4 illustrates the shadow 404 coming off the object as the shadow rays pass through the object. Approximations can involve the thickness of the object and its opacity to determine the translucency of the shadows.

[0029] FIG. 5 illustrates an example user interface for applying settings used in rendering volumetric transparency and shadows. First, for backscattering, the volumetric setting can be selected to apply the specific Beer-Lambert attenuation approximations described above. Thin-Object Mode can limit computations to front-face attenuation, while Volumetric Mode can compute attenuation at both front-face and back-face hits. The Alpha Cutoff setting 502 is used to specify a maximum opacity above which an object will be treated as an opaque surface instead of a transmissive surface. This setting is commonly used in real-time render engines for objects such as leaves and grass, which are represented by low-poly "cards" upon which a texture is used to represent more complex silhouettes through the use of per-pixel alpha values. In that case, when the alpha value is above the alpha cutoff value, the leaf is rendered with the color taken from the texture and multiplied with the alpha value for blending against the background. Pixels with alpha values below the cutoff are not rendered. Here, the same setting is reused for transmissive geometry, and the two material types are differentiated using the index of refraction (IOR) parameter. Alpha cutoff materials such as foliage are denoted with an IOR value of 0, while transmissive materials have IOR values above 0. When an object with an IOR of 0 and an alpha value above the cutoff is hit, the ray can be attenuated by opacity and albedo but can otherwise continue tracing in the original direction to achieve an alpha-blended effect. Opacities below the alpha cutoff can be treated as having an opacity of 0. For transmissive objects with IOR values above 0, transparency is modeled as described prior.
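The IOR-based differentiation between alpha-cutoff and transmissive materials might look like the following sketch; the Material fields and the enum are illustrative assumptions.

```cpp
// Classify a ray hit per the Alpha Cutoff / IOR convention described above.
struct Material { float ior; float alpha; };

enum class HitKind {
    Skipped,       // foliage-style card with alpha below the cutoff:
                   // treated as opacity 0 and not rendered
    AlphaBlended,  // IOR == 0: attenuate by opacity and albedo, keep the
                   // original ray direction
    Opaque,        // transmissive material with alpha above the cutoff
    Transmissive   // IOR > 0: volumetric transparency model applies
};

HitKind classifyHit(const Material& m, float alphaCutoff) {
    if (m.ior == 0.0f) {
        // Alpha-cutoff material such as leaves or grass on low-poly cards.
        return (m.alpha >= alphaCutoff) ? HitKind::AlphaBlended
                                        : HitKind::Skipped;
    }
    // Transmissive material: above the cutoff it is treated as opaque.
    return (m.alpha > alphaCutoff) ? HitKind::Opaque
                                   : HitKind::Transmissive;
}
```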

[0030] The Max Alpha Hits setting 504 can limit the performance impact of alpha-blended objects. This setting can stop additional ray casts after the specified number of hits on alpha-blended objects. The Approximate Spectral Transmission setting 506 can enable a separate model for spectral absorption in both transparent object rendering and transparent shadows. When enabled, this feature uses the attenuation computed at back faces to interpolate between the object's albedo and the object's subtractive transmission color. Then, this color is multiplied with the attenuation to further attenuate the radiance through the object.

[0031] Under the Shadow settings, the attenuation approximations can be applied when ray-traced shadows are selected. When transparent shadows are disabled, all objects can cast shadows at a full darkness value of 0. When enabled, the system can modulate a 1-channel shadow value using opacity. Once enabled, additional independent options can become available under transparent shadow setting 508. For example, color can be enabled to modulate a 3-channel shadow color by applying opacity and albedo as described above. The textured setting, when disabled, uses a single albedo as an opacity value for an entire object, and assumes that an object's mesh is representative of the object being rendered. When enabled, the setting can read an albedo map to sample per-pixel albedo and opacity. Fresnel Attenuation can be enabled to modulate the shadow to approximate the visual appearance of attenuation due to Fresnel reflectance as light passes through the object. Absorption can be enabled to approximate volumetric absorption. These settings can be set as precompiled shader defines or made available as uniforms for dynamic user alterations. The former may be preferable for performance, while the latter may be preferable for development. While FIG. 5 illustrates an example user interface, other formats or options can be applied to further affect rendering features. This user interface can be displayed on a mobile device display such as a touch screen.
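As a sketch of the configuration surface described above, the options could be grouped as follows; the field names mirror the UI labels, but the struct itself and the default values are our assumptions.

```cpp
// Illustrative grouping of the transparent-shadow options described above.
struct TransparentShadowSettings {
    bool transparentShadows = false;  // off: shadows cast at full darkness (0)
    bool color = false;               // modulate a 3-channel shadow color
                                      // using opacity and albedo
    bool textured = false;            // sample per-pixel albedo/opacity from
                                      // an albedo map instead of one value
    bool fresnelAttenuation = false;  // modulate by |dot(N, shadow dir)|
    bool absorption = false;          // approximate volumetric absorption
    int  maxHits = 4;                 // cap on hits per shadow ray (example value)
};
```

Baking these as precompiled shader defines trades runtime flexibility for performance, while exposing them as uniforms eases development, as the paragraph above notes.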

[0032] FIG. 6 depicts a block diagram of an example computer system 600 in which various of the embodiments described herein may be implemented. The computer system 600 includes a bus 602 or other communication mechanism for communicating information, one or more hardware processors 604 coupled with bus 602 for processing information. Hardware processor(s) 604 may be, for example, one or more general purpose microprocessors.

[0033] The computer system 600 also includes a main memory 606, such as a random access memory (RAM), cache and/or other dynamic storage devices, coupled to bus 602 for storing information and instructions to be executed by processor 604. Main memory 606 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 604. Such instructions, when stored in storage media accessible to processor 604, render computer system 600 into a special-purpose machine that is customized to perform the operations specified in the instructions.

[0034] The computer system 600 further includes a read only memory (ROM) 608 or other static storage device coupled to bus 602 for storing static information and instructions for processor 604. A storage device 610, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to bus 602 for storing information and instructions.

[0035] The computer system 600 may be coupled via bus 602 to a display 612, such as a liquid crystal display (LCD) (or touch screen), for displaying information to a computer user. An input device 614, including alphanumeric and other keys, is coupled to bus 602 for communicating information and command selections to processor 604. Another type of user input device is cursor control 616, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 604 and for controlling cursor movement on display 612. In some embodiments, the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor.

[0036] The computing system 600 may include a user interface module to implement a GUI that may be stored in a mass storage device as executable software codes that are executed by the computing device(s). This and other modules may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.

[0037] In general, the words "component," "engine," "system," "database," "data store," and the like, as used herein, can refer to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, C or C++. A software component may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software components may be callable from other components or from themselves, and/or may be invoked in response to detected events or interrupts. Software components configured for execution on computing devices may be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution). Such software code may be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware components may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors.

[0038] The computer system 600 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 600 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 600 in response to processor(s) 604 executing one or more sequences of one or more instructions contained in main memory 606. Such instructions may be read into main memory 606 from another storage medium, such as storage device 610. Execution of the sequences of instructions contained in main memory 606 causes processor(s) 604 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.

[0039] The term "non-transitory media," and similar terms, as used herein refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 610. Volatile media includes dynamic memory, such as main memory 606. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same.

[0040] Non-transitory media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between non-transitory media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 602. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.

[0041] The computer system 600 also includes a communication interface 618 coupled to bus 602. Communication interface 618 provides a two-way data communication coupling to one or more network links that are connected to one or more local networks. For example, communication interface 618 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 618 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN). Wireless links may also be implemented. In any such implementation, communication interface 618 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.

[0042] A network link typically provides data communication through one or more networks to other data devices. For example, a network link may provide a connection through local network to a host computer or to data equipment operated by an Internet Service Provider (ISP). The ISP in turn provides data communication services through the world wide packet data communication network now commonly referred to as the "Internet." Local network and Internet both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link and through communication interface 618, which carry the digital data to and from computer system 600, are example forms of transmission media.

[0043] The computer system 600 can send messages and receive data, including program code, through the network(s), network link and communication interface 618. In the Internet example, a server might transmit a requested code for an application program through the Internet, the ISP, the local network and the communication interface 618.

[0044] The received code may be executed by processor 604 as it is received, and/or stored in storage device 610, or other non-volatile storage for later execution.

[0045] Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code components executed by one or more computer systems or computer processors comprising computer hardware. The one or more computer systems or computer processors may also operate to support performance of the relevant operations in a "cloud computing" environment or as a "software as a service" (SaaS). The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The various features and processes described above may be used independently of one another, or may be combined in various ways. Different combinations and sub-combinations are intended to fall within the scope of this disclosure, and certain method or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate, or may be performed in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The performance of certain of the operations or processes may be distributed among computer systems or computer processors, not only residing within a single machine, but deployed across a number of machines.

[0046] As used herein, a circuit might be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a circuit. In implementation, the various circuits described herein might be implemented as discrete circuits or the functions and features described can be shared in part or in total among one or more circuits. Even though various features or elements of functionality may be individually described or claimed as separate circuits, these features and functionality can be shared among one or more common circuits, and such description shall not require or imply that separate circuits are required to implement such features or functionality. Where a circuit is implemented in whole or in part using software, such software can be implemented to operate with a computing or processing system capable of carrying out the functionality described with respect thereto, such as computer system 600.

[0047] As used herein, the term "or" may be construed in either an inclusive or exclusive sense. Moreover, the description of resources, operations, or structures in the singular shall not be read to exclude the plural. Conditional language, such as, among others, "can," "could," "might," or "may," unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps.

[0048] Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. Adjectives such as "conventional," "traditional," "normal," "standard," "known," and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. The presence of broadening words and phrases such as "one or more," "at least," "but not limited to" or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent.