WHAT IS CLAIMED IS:

1. A computer-implemented method of simulating combustion processes, the computer-implemented method comprising: under the control of one or more computer systems configured with executable instructions: receiving a first set of data representing a fluid flow including a plurality of combustion precursors comprising at least one arbitrary combustion precursor; simulating a chemical reaction representing simulated combustion involving the at least one arbitrary combustion precursor and generating combustion byproducts; determining a change in temperature and a change in molar mass of the combustion byproducts due to the chemical reaction; determining a divergence of the combustion byproducts based on a combination of the change in the temperature and the change in molar mass; and generating one or more data structures of the simulated combustion based on values of at least a first portion of the fluid flow.

2. The computer-implemented method of claim 1, further comprising: receiving user provided parameter values defining the at least one arbitrary combustion precursor, the chemical reaction being simulated based at least in part on the user provided parameter values.

3. The computer-implemented method of claim 2, further comprising: providing a user interface to a client computing device for display thereby, the user interface being configured to receive the user provided parameter values from a user.

4. The computer-implemented method of claim 2, wherein the plurality of combustion precursors comprises at least one non-arbitrary combustion precursor.

5. The computer-implemented method of claim 4, wherein at least one non-arbitrary combustion precursor is a linear alkane having a chemical composition with a form CnH2n+2.

6. The computer-implemented method of claim 4, wherein the user provided parameter values defining the at least one arbitrary combustion precursor are changed such that the at least one non-arbitrary combustion precursor is converted into an arbitrary combustion precursor.

7. The computer-implemented method of claim 1, wherein the plurality of combustion precursors comprises at least one non-combustible component.

8. The computer-implemented method of claim 1, wherein the divergence is determined based on the change in: temperature over time; or molar mass over time.

9. The computer-implemented method of claim 1, further comprising simulating the combustion byproducts as a second portion of the fluid flow having a variable density, wherein the variable density of the fluid flow varies in response to the divergence of a velocity field pertaining to the fluid flow.

10. The computer-implemented method of claim 1, wherein: at least a second portion of the fluid flow is treated as incompressible; or at least a second portion of the fluid flow is treated as compressible.

11. The computer-implemented method of claim 1, wherein the one or more data structures are configured for use by an animation process for generating one or more visual representations of a combustion event, the method further comprising: using a convolution kernel to simulate heat diffusion by blurring at least a portion of the one or more visual representations of the combustion event.

12. The computer-implemented method of claim 11, further comprising: deriving the convolution kernel from a heat equation.

13. A computer-implemented method comprising the method of any preceding claim and for generating a graphics output with a simulated combustion using the one or more data structures.

14. A computer system for simulating combustion processes, the system comprising: at least one processor; and a computer-readable medium storing instructions, which when executed by the at least one processor, cause the system to carry out the method of claim 1.

15. A non-transitory computer-readable storage medium storing instructions, which when executed by at least one processor of a computer system, cause the computer system to carry out the method of claim 1.
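By way of a non-limiting, hypothetical illustration of the divergence step recited in claim 1, the sketch below assumes an ideal gas at constant thermodynamic pressure, in which density is proportional to molar mass divided by temperature, so the continuity equation yields a velocity divergence that combines the two rates of change. The function and variable names are illustrative only and do not appear in the claims or the specification.

```python
import numpy as np

def divergence_source(T, M, dT_dt, dM_dt):
    """Hypothetical sketch: velocity-divergence source from the changes in
    temperature T [K] and molar mass M [kg/mol] of the combusting gas.

    Assumes an ideal gas at constant thermodynamic pressure, so density is
    proportional to M / T, and the continuity equation gives
        div(u) = (1/T) dT/dt - (1/M) dM/dt,
    where the time derivatives follow the gas. Inputs are arrays over the grid.
    """
    return dT_dt / T - dM_dt / M

# Example: hotter, lighter byproducts expand the gas (positive divergence).
T, M = np.array([300.0]), np.array([0.029])            # initial state
dT_dt, dM_dt = np.array([1500.0]), np.array([-0.001])  # per unit time
print(divergence_source(T, M, dT_dt, dM_dt))
```

Under these assumptions, byproducts that are hotter and lighter than the reactants produce a positive divergence, i.e., local expansion of the gas.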
[0097] FIG. 3 is a diagram illustrating combustion products obtained when a combustion, involving a combustion fuel (CnH2n+2), is fuel-lean, stoichiometric, and fuel-rich with respect to oxygen in an atmosphere (αO2). As illustrated in FIG. 3, the fuel-lean condition is met when α > α_n for the combustion of CnH2n+2 + αO2. The combustion products for the fuel-lean condition are thus nCO2 + (n + 1)H2O + (α − α_n)O2. For the stoichiometric condition of α = α_n for the combustion of CnH2n+2 + αO2, the combustion products include nCO2 + (n + 1)H2O. For the fuel-rich condition of α < α_n for the combustion of CnH2n+2 + αO2, the combustion products can be written as shown in FIG. 3. The constant α_n is defined by Equation 32 as

α_n = (3n + 1)/2 (Eqn. 32)
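By way of a non-limiting, hypothetical illustration of Equation 32 and the three regimes shown in FIG. 3, the sketch below computes α_n for a linear alkane CnH2n+2 and classifies a given oxygen ratio α; the function names are illustrative and not part of the disclosure.

```python
def alpha_n(n: int) -> float:
    """Moles of O2 needed to fully burn one mole of CnH2n+2 (Eqn. 32):
    n O2 for the carbon (-> n CO2) plus (n + 1)/2 O2 for the hydrogen
    (-> (n + 1) H2O), i.e. alpha_n = (3n + 1) / 2."""
    return (3 * n + 1) / 2

def classify_mixture(n: int, alpha: float) -> str:
    """Classify CnH2n+2 + alpha O2 as fuel-lean, stoichiometric, or fuel-rich."""
    a_n = alpha_n(n)
    if alpha > a_n:
        return "fuel-lean"       # excess (alpha - alpha_n) O2 remains
    if alpha == a_n:
        return "stoichiometric"  # products are n CO2 + (n + 1) H2O
    return "fuel-rich"           # not enough O2 to burn all of the fuel

# Example: methane (n = 1) needs alpha_n = 2 moles of O2 per mole of fuel.
print(alpha_n(1), classify_mixture(1, 2.5))  # -> 2.0 fuel-lean
```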
[0098] In Equation 33, the coefficient α represents the actual oxygen-to-fuel ratio. Thus, the fuel-to-oxidizer equivalence ratio (represented by the variable “λ”) for this reaction may be determined by Equation 33.

[0099] If the thermodynamic and chemical properties of carbon dioxide (CO2) and water vapor (H2O) are assumed to be the same (which is a reasonably good approximation), they can be combined into a single type of product, denoted “Prod.” This means one less chemical component needs to be tracked and leads to a simpler diagram illustrated in FIG. 4.

[0100] FIG. 4 is a version of the diagram of FIG. 3 that is simplified by combining carbon dioxide (CO2) and water vapor (H2O) into a single product labeled “Prod.” As illustrated in FIG. 4, the fuel-lean condition is met when α > α_n for the combustion of CnH2n+2 + αO2. The combustion products for the fuel-lean condition can be rewritten as (2n + 1)Prod + (α − α_n)O2. For the stoichiometric condition of α = α_n for the combustion of CnH2n+2 + αO2, the combustion products can be rewritten as (2n + 1)Prod. For the fuel-rich condition of α < α_n for the combustion of CnH2n+2 + αO2, the combustion products can be written as shown in FIG. 4.

[0101] A characteristic feature of non-premixed combustion of hydrocarbon fluid (e.g., diffusion flames) is the formation of soot, which is generally agreed to occur within a temperature range of 1300 K to 1600 K. While the chemistry and physics of soot formation in diffusion flames is exceedingly complex, it can be simplified to an equilibrium between fuel and soot, which Equation 34 describes. In Equation 34, an expression represents an equilibrium constant defined as a ratio of the concentration of soot to that of fuel.

[0102] The equilibrium constant is assumed to be constant within the temperature range of 1300 K to 1600 K and zero otherwise. The value of the equilibrium constant depends at least in part on the type of fuel and may be available in look-up tables and/or included in the combustion data 150 (see FIG. 1). The value of the equilibrium constant is notably zero for methane at all temperatures, because combustion of this type of fuel in a diffusion flame does not produce soot in significant quantities.

[0103] Soot consists of small particles (typically 10-100 nanometers in size, with masses of roughly 3000-10000 atomic mass units) formed by a complex process of chemical reactions and coagulation. Though these particles are technically solids, the particles are small enough to be suspended and transported by the fluid flow. These hot soot particles are incandescent and emit a characteristic yellow glow, which gives rise to the term “luminous flames” for the diffusion (or non-premixed) flames that form soot. Conversely, flames that do not form soot are called non-luminous flames.

[0104] During the formation of soot, typically interior to the flame-front, no oxidizer is present, but as the hot soot is ejected by buoyancy forces (caused by local density differences), the hot soot will invariably cross into regions (primarily at the flame tip) with oxidizer, where the hot soot will react with the oxidizer to form products. Equation 35 (crudely) models this reaction. In Equation 35, a parameter “β” scales with the size of the soot particles (or, more precisely, the number of carbon atoms per soot particle), and an expression represents a temperature-dependent equilibrium constant that depends on the fuel source and size of the flame.
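By way of a non-limiting, hypothetical illustration of the soot model of paragraphs [0101]-[0104]: because Equations 34 and 35 are not reproduced in the text above, the functional form and the constants below are assumptions rather than the disclosed equations; the sketch merely shows how an equilibrium constant that is nonzero only in the 1300 K to 1600 K window might partition the local fuel into soot.

```python
def soot_equilibrium_constant(T, K_fuel):
    """Equilibrium constant of Equation 34 (ratio of soot concentration to
    fuel concentration), assumed constant inside the 1300-1600 K formation
    window and zero otherwise; K_fuel is a per-fuel value (zero for methane)."""
    return K_fuel if 1300.0 <= T <= 1600.0 else 0.0

def soot_from_fuel(fuel_fraction, T, K_fuel):
    """Assumed reading of Equation 34: split the local fuel into remaining
    fuel and soot so that soot / fuel equals the equilibrium constant.
    Returns (new_fuel_fraction, soot_fraction)."""
    K = soot_equilibrium_constant(T, K_fuel)
    soot = fuel_fraction * K / (1.0 + K)
    return fuel_fraction - soot, soot

# Placeholder numbers only: a heavier fuel with K_fuel = 0.1 at 1450 K.
print(soot_from_fuel(0.02, 1450.0, 0.1))   # a small soot fraction forms
print(soot_from_fuel(0.02, 2000.0, 0.1))   # outside the window: no soot
```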
[0105] If all of the soot particles are oxidized, the flame is said to be non-sooting; otherwise the flame is referred to as sooting. In other words, even flames that are not accompanied by rising (or visible) soot might still be producing soot internally and hence be luminous flames. Non-luminous flames are always non-sooting, and sooting flames are always luminous flames.

[0106] When temperatures are high (typically around 2500 K), products (e.g., H2O) start to undergo chemical reactions themselves; that is, the products separate or split into smaller molecules or atoms, a process that is referred to as dissociation. For example, carbon dioxide may dissociate into carbon monoxide (CO) and oxygen (O). To help reduce the number of chemical species, the combustion model 170 may ignore dissociation.

[0107] The combustion model 170 may track the following five chemical species: CnH2n+2, oxygen (O2), nitrogen (N2), Prod, and Soot. But, if the combustion model 170 represents each species as molar fractions, the combustion model 170 only needs to track four quantities, because the fifth quantity can be derived or calculated from the other four quantities given that the sum of all of the molar fractions must be one. In other words, reducing the number of tracked quantities from five to four can make simulating the combustion model more efficient (for example, by approximately 20 percent) while keeping the sum of all of the molar fractions equal to one. This can be accomplished, for example, by reducing the number of combustion products, as described above with respect to FIG. 4. In such instances, the combustion products for the fuel-lean condition can be rewritten from nCO2 + (n + 1)H2O + (α − α_n)O2 to (2n + 1)Prod + (α − α_n)O2. Similarly, the combustion products for the stoichiometric condition can be rewritten from nCO2 + (n + 1)H2O to (2n + 1)Prod. The combustion products for the fuel-rich condition can be rewritten in a similar manner. By reducing the number of combustion byproducts, simulations can be performed in a fewer number of computing cycles compared to tracking all possible chemical species or byproducts.

[0108] Referring to FIG. 2, as mentioned above, the fire-triangle 200 includes the heat 240 that is required to ignite the combustion 210. Technically, this is referred to as activation energy needed to kick-start a self-sustained chemical reaction, and it is often quantified as the auto-ignition temperature. The auto-ignition temperature is the lowest temperature at which combustion is initiated (given the presence of both fuel and oxidizer). In Table 1 below, ignition temperatures are listed in both Celsius and kelvin for the first eight hydrocarbons of the form “CnH2n+2”. As a general rule, ignition temperatures decrease as the size of the molecules increases. The auto-ignition temperatures shown in Table 1 may be included in the combustion data 150 (see FIG. 1).

TABLE 1. Auto-ignition temperatures for the first eight hydrocarbons

Adiabatic Flame Temperature

[0109] At this point, a model for the chemical reaction of combustion has been provided along with a measure of heat at constant thermodynamic pressure (enthalpy, cf. Equation 21). The chemistry module 164 may use this information to compute the temperature obtained by the simulated combustion event. As mentioned above, the combustion model 170 assumes constant thermodynamic pressure in open air.
Additionally, the combustion model 170 assumes the chemical reaction of combustion happens very fast (e.g., virtually instantaneously), which means the release of chemical heat has no time to be exchanged (as thermal heat) with the surroundings, resulting in an adiabatic isobaric combustion process. As shown in Equation 22, this implies that the total enthalpy for the combustion is conserved (i.e., the total change in enthalpy is zero). This enthalpy originates from the chemical reaction and is manifested as reactants at the initial temperature transforming into products at the final temperature. Normally, the initial temperature of the reactants is known, so the challenge is to compute the higher unknown final temperature of the products based on the conservation of energy, which for adiabatic isobaric combustion equals enthalpy. The higher unknown final temperature of the products for adiabatic isobaric combustion is referred to as an adiabatic flame temperature.

[0110] Hess’s law may be used to compute the adiabatic flame temperature. Hess’s law states that the total change of enthalpy for a reaction is independent of the number of steps or stages of the reaction. In other words, the total enthalpy is the same whether the reaction is completed in one step or includes multiple steps. This is a consequence of enthalpy being a state variable, which means enthalpy is independent of the pathway from the reactants to the products of the chemical reaction. Combining Hess’s law with the assumption of adiabatic isobaric combustion means that the total enthalpy change occurring in multiple sub-steps must remain zero. This is important because it allows changes of enthalpies for sub-steps to be measured and tabulated at fixed conditions and later applied to reactions that take place under different conditions. For example, changes of enthalpies for sub-steps may be included in the combustion data 150 (see FIG. 1).

[0111] FIG. 5 illustrates different types of enthalpy for stoichiometric combustion of methane in air. Specifically, the diagram of FIG. 5 shows enthalpy changes for a chemical reaction at an initial temperature (represented by the variable “T1”) and a final temperature (represented by the variable “T2”). Both the initial and final temperatures are different from a standard temperature (represented by the variable “T0”). By way of a non-limiting example, the standard temperature may be a temperature (e.g., 298.15 K) at which the enthalpy of formation is known. The enthalpy of formation is defined as the enthalpy change occurring during the formation of one mole of a substance from its constituent elements at standard states (symbolized by a superscript “°”).

[0112] By definition, enthalpy is conserved for any adiabatic reaction at constant thermodynamic pressure. Equation 36 expresses the enthalpy of the reaction as the difference between the enthalpy of formation for the products and the enthalpy of formation for the reactants.

[0113] Equation 36 is represented by the cyclic connection of a lower triangle 500 in FIG. 5. The cyclic connection represented by an upper rectangle 510 in FIG. 5 can be expressed as Equation 37.

[0114] Combining Equation 36 and Equation 37 leads to Equation 38 below.

[0115] Equation 38 is schematically represented as the cyclic connection of all five stages in FIG. 5.
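By way of a non-limiting, hypothetical illustration of how Equation 38 might be solved for the adiabatic flame temperature: the sketch below replaces the polynomial specific heat capacities of Equation 27 with constants, so the enthalpy balance becomes linear in the final temperature T2. The function name and the numerical values are illustrative placeholders and are not taken from the combustion data 150.

```python
def adiabatic_flame_temperature(T1, T0, dH_rxn, reactants, products):
    """Minimal sketch of Equation 38 with constant heat capacities.

    T1        : initial temperature of the reactants [K]
    T0        : standard temperature at which dH_rxn is tabulated [K]
    dH_rxn    : enthalpy of reaction at T0 [J] (negative for exothermic)
    reactants : list of (moles, Cp [J/(mol K)]) for each reactant (incl. N2)
    products  : list of (moles, Cp [J/(mol K)]) for each product (incl. N2)

    Enthalpy balance (adiabatic, isobaric):
      sum_react n*Cp*(T0 - T1) + dH_rxn + sum_prod n*Cp*(T2 - T0) = 0
    """
    cp_react = sum(n * cp for n, cp in reactants)
    cp_prod = sum(n * cp for n, cp in products)
    return T0 + (-dH_rxn - cp_react * (T0 - T1)) / cp_prod

# Illustrative numbers only (not from the combustion data 150): stoichiometric
# methane in air, CH4 + 2 O2 + 7.52 N2 -> CO2 + 2 H2O + 7.52 N2.
T2 = adiabatic_flame_temperature(
    T1=298.15, T0=298.15, dH_rxn=-802_000.0,
    reactants=[(1, 35.7), (2, 29.4), (7.52, 29.1)],
    products=[(1, 54.3), (2, 41.3), (7.52, 32.7)],
)
print(round(T2))  # roughly 2400 K with these constant-Cp placeholder values
```

Nitrogen is included in both sums even though it does not react, which is consistent with the note below about its effect on the sensible enthalpy.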
Note that while nitrogen (N2) has no effect on the enthalpy of formation (or the enthalpy of reaction), nitrogen (N2) does affect the sensible enthalpy (the two integrals over specific heat capacity). In other words, the presence of nitrogen lowers the adiabatic flame temperature because heat is also required to raise the temperature of the nitrogen. Because atmospheric air is made up of 78.2% nitrogen (and only 21% oxygen), nitrogen (N2) can have a significant effect on the actual temperature of a flame, which is why the combustion model 170 accounts for nitrogen (N2).

[0116] Enthalpy of formation for most chemicals is available in tables and may be included in the combustion data 150 (see FIG. 1). Therefore, by using polynomial (or constant) approximations of the specific heat capacities (cf. Equation 27), Equation 38 can be solved for the final temperature (represented by the variable “T2”) when the initial temperature (represented by the variable “T1”) is known. The final temperature is referred to as the adiabatic flame temperature.

Heat Transfer

[0117] While the adiabatic flame temperature is fixed (when the initial temperature and the chemical reaction are known), surrounding gases that are not undergoing chemical combustion are subject to various processes that change the temperature locally. Simply stated, heat transfer can be defined as a passage of thermal energy from a first (hotter) material to a second (colder) material. This “passage” typically occurs via three distinct processes, namely, convection (or advection), conduction (or diffusion), and radiation. Physically, convection corresponds to heat (or thermal energy) transfer due to material transport, conduction corresponds to heat transfer driven by temperature differences between materials in contact, and radiation corresponds to electromagnetic emission and absorption.

[0118] Deriving the transport and diffusion equations is relatively straightforward for incompressible fluids, but the combustion model 170 explicitly allows for variable density (as well as temperature). Below, derivations of the heat transfer equation used by the combustion model 170 are provided.

Convection

[0119] As a gas is being advected by a fluid velocity field, the temperature, which is a material property of the molecules of the gas, undergoes the same transport. Equation 39 calculates the thermal energy (or heat) per unit volume (represented by a variable “Q_den”) contained in a fluid as a function of the density (represented by the variable “ρ”), the absolute temperature (represented by the variable “T”), and the specific heat capacity at constant thermodynamic pressure (represented by the variable “C_P”). The thermal energy (or heat) per unit volume is sometimes referred to as the thermal energy density of the fluid.

Q_den = ρ C_P T (Eqn. 39)

[0120] Next, consider a small volume of fluid (represented by a variable “Ω”) with a surface area (represented by the expression “∂Ω”). The rate of change of thermal heat in the volume of fluid (represented by the variable “Ω”) may be expressed by Equation 40 and Equation 41. In Equation 40 and Equation 41, it is assumed that the specific heat capacity (represented by the variable “C_P”) is time-independent within the volume of fluid (represented by the variable “Ω”), and the volume of fluid is fixed in time.
[0121] If the fluid is moving with a velocity (represented by a variable “u”), the heat flux due to advection may be expressed as Q_den u = ρ C_P T u, and the rate of change of thermal heat due to motion is given by Equation 42 and Equation 43 below.

[0122] If (for now) convection (or fluid motion) is assumed to be the only mechanism of heat transfer, conservation of energy implies that the sum of Equation 41 and Equation 43 is zero. This relationship is shown in Equation 44 below.

[0123] After some re-factoring, this leads to Equation 45 below, which in turn implies (since the volume of fluid represented by the variable “Ω” is arbitrary) that the kernel of the volume integral must be zero.

[0124] As explained above, the continuity equation (Equation 6) may be expressed as ∂ρ/∂t + ∇·(ρu) = 0, which when combined with ρ > 0 and C_P > 0 leads to Equation 46 below. Equation 46 is a transport equation for the absolute temperature, which is a prototypical hyperbolic transport equation.

[0125] Equation 46 resembles Equation 11, which is the transport equation for density in an incompressible flow. While Equation 46 models the physical phenomenon of convection, the mathematical terminology is (hyperbolic) advection.

Conduction

[0126] Conduction is the transport of heat caused by physical contact between (instead of motion by) molecules. Heat flux (represented by a variable “q_dif”) may be measured in energy per unit time and unit area (e.g., W/m²). Equation 47 is Fourier’s law. According to Fourier’s law, the heat flux through a surface is proportional to the negative gradient of the temperature (e.g., measured in K/m) across the surface, i.e., q_dif = −k ∇T. In Equation 47, the variable “k” represents the thermal conductivity (e.g., measured in W/(m K)). The thermal conductivity (represented by the variable “k”) may be 0.026 W/(m K) for air. The subscript “dif” in the variable “q_dif” refers to diffusion, which is the fundamental mechanism for this type of heat transfer.

[0127] Integrating the heat flux over the surface area (represented by the expression “∂Ω”) of a small fluid volume (represented by the variable “Ω”) yields Equation 48 and Equation 49.

[0128] Combining the effects of advection and diffusion implies Equation 50 and Equation 51 below.

[0129] When ρ > 0 and C_P > 0, Equation 52, which is a PDE, may be obtained.

[0130] The first term on the right-hand side of Equation 52 accounts for transport (advection), and the second term on the right-hand side of Equation 52 models diffusion through the Laplacian differential operator. Physicists refer to this as convection-diffusion, whereas mathematicians tend to call it advection-diffusion. Equation 52 reduces to Equation 53 below if the fluid is assumed to be stationary.

[0131] In Equation 53, the ratio k/(ρ C_P) is known as the thermal diffusivity of the medium. Equation 53 is a prototypical parabolic PDE, which is simply dubbed “the heat equation.” But, it is important to note that Equation 53 accounts for only one of three fundamental mechanisms of heat transfer, namely conduction. Equation 52, which was derived for heat advection-diffusion for a compressible fluid model, is actually identical to its incompressible counterpart, typically employed in computer graphics. This implies that a computer process computing values using Equation 52 would not have to deal with additional complications in its numerical implementation.

[0132] Solving the heat equation may be computationally expensive. But, a well-known mathematical technique exists for obtaining a general or fundamental solution to many PDEs, like the heat equation.
Thus, a fundamental solution may be generated for the heat equation (and included in the combustion data 150). Then, concrete solutions can be generated from the fundamental solution using convolution. In computer graphics, convolution causes or is depicted as blurring. Such blurring may be applied to a visual representation using a convolution kernel (e.g., a Gaussian kernel). Thus, solving the heat equation may be reduced to blurring with a convolution kernel, which can be done efficiently. The convolution kernel may be derived from the heat equation. The size of the convolution kernel may be based at least in part on the time step, and the viscosity or the diffusion coefficient. For example, smaller time steps may yield smaller convolution kernels.

Radiation

[0133] Heat transfer caused by electromagnetic radiation is notoriously difficult to model, since such heat transfer is based on complex interactions over long distances (much like global illumination). As such, the combustion model 170 aims at capturing the essence of heat transfer caused by electromagnetic radiation and models it with an additional term added to the convection-diffusion PDE (Equation 52) that approximates its behavior.

[0134] A simple classical model for thermal radiation is the blackbody model, and radiation from an idealized blackbody can be expressed by Planck’s law, which is Equation 54 below.

[0135] Planck’s law describes the amount of heat (or energy) emitted per unit time, wavelength, and surface area from an idealized blackbody as a function of its absolute temperature (represented by a variable “T”) when the absolute temperature is greater than zero. When integrated over all wavelengths (represented by a variable “λ”), Planck’s law reduces to Equation 55, which is known as the Stefan-Boltzmann law.

[0136] Equation 55 includes the Stefan-Boltzmann constant σ = 2π⁵k⁴/(15c²h³) ≈ 5.67 × 10⁻⁸ W m⁻² K⁻⁴, in which the variable “k” represents the Boltzmann constant, the variable “c” represents the speed of light in vacuum, and the variable “h” represents Planck’s constant. To allow this simple power law to be applied to more general materials (sometimes called graybody materials), Equation 56 below introduces a unit-less parameter “ε”. In Equation 56, a variable represents the radiant power (e.g., measured in watts (or J/s) per unit area (m²)) emitted in the normal direction at all frequencies from a body with the temperature represented by the variable “T.” The temperature may be measured in kelvin (K). When the parameter “ε” has a value that is greater than zero and less than or equal to one, the emissivity is that of a graybody. The parameter “ε” is equal to one for an ideal blackbody.

[0137] Several problems arise when a model of thermal radiation for combustion is based on the Stefan-Boltzmann law. First, the model would apply only to solids and not gases. So, soot is actually the only material in the combustion model 170 that can be reasonably modeled by the Stefan-Boltzmann law. In fact, this is a fairly good model for the incandescence of luminous flames, but not for the light emission from the hot gases. Second, solving heat transfer due to blackbody radiation in a fluid, despite the deceivingly simple power relation, is exceedingly complicated. In fact, so much so that many scientific simulation techniques approximate and even ignore this effect.
The reason for this is that the problem resembles global illumination in complexity, where all materials are simultaneously emitting and absorbing photons. The combustion model 170 attempts to model this phenomenon with a term that at least captures the essence of this mechanism under severe assumptions. As such, the following may be considered an approximation.

[0138] If a graybody with a temperature (represented by the variable “T”) is embedded in an ambient space with a constant temperature (represented by the variable “T_amb”), the net normal flux of radiation per unit surface area away from this body may be determined by Equation 57 as εσ(T⁴ − T_amb⁴). In Equation 57, a variable represents the local surface normal of the graybody. Both the material temperature and the ambient temperature (represented by the variables “T” and “T_amb,” respectively) are measured in kelvin.

[0139] For a volume (represented by the variable “Ω”) with a surface area (represented by the expression “∂Ω”), Equation 58 may be used to determine the total rate of change of energy due to heat flux.

[0140] Equation 59 may be obtained when the volume (represented by the variable “Ω”) is a very small sphere, because both the temperature and the ambient temperature (represented by the variables “T” and “T_amb,” respectively) can be assumed to be constant, with a variable “r” representing the radius of the sphere. The radius may be conceptualized as being a characteristic length scale of the radiation model (such as the voxel size).

[0141] The total energy balance from heat exchange due to the combined effects of convection, conduction, and radiation may be expressed as Equation 60 below.

[0142] Equation 60 implies Equation 61 below, which is an integral equation.

[0143] Finally, since Equation 61 must hold for any small spherical volume, Equation 62 below, which is a PDE, may be obtained for the absolute temperature. Equation 62 is an equation of evolution for temperature.

[0144] The model of radiation may be considered to be a gross oversimplification because the ambient temperature (represented by the variable “T_amb”) is far from constant in time or space, but the ambient temperature captures the spirit of radiative cooling. As such, the ambient temperature (represented by the variable “T_amb”) is defined as the average temperature of the surroundings that exchange heat through radiation with the fluid.

[0145] For example, FIG. 6 illustrates the example visual content generation system 600 as might be used to generate imagery in the form of still images and/or video sequences of images. Visual content generation system 600 might generate imagery of live action scenes, computer generated scenes, or a combination thereof. In a practical system, users are provided with tools that allow them to specify, at high levels and low levels where necessary, what is to go into that imagery. For example, a user might be an animation artist (like artist 142 illustrated in FIG. 1) and might use visual content generation system 600 to capture interaction between two human actors performing live on a sound stage and replace one of the human actors with a computer-generated anthropomorphic non-human being that behaves in ways that mimic the replaced human actor’s movements and mannerisms, and then add in a third computer-generated character and background scene elements that are also computer-generated, all in order to tell a desired story or generate desired imagery.
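By way of a non-limiting, hypothetical illustration combining the heat-transfer mechanisms derived above: conduction is applied as Gaussian blurring per paragraph [0132] (the kernel width follows from the fundamental solution of the heat equation), and radiative cooling follows the graybody term of Equations 57-62. Because Equation 62 is not reproduced in the text, the 3/r surface-to-volume factor for a small sphere and the explicit time stepping below are assumptions; advection is left as a separate step, and all names and numbers are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def step_temperature(T, dt, dx, diffusivity, emissivity, sigma_sb, T_amb,
                     rho, c_p, r):
    """One temperature step combining conduction (as blurring) and radiation.

    Conduction: the heat equation dT/dt = diffusivity * Laplacian(T) over a
    time step dt is applied by blurring T with a Gaussian kernel of standard
    deviation sqrt(2 * diffusivity * dt), converted to grid cells via dx.

    Radiation: an explicit Euler step of an assumed radiative-cooling term,
    dT/dt ~ -3 * emissivity * sigma_sb * (T**4 - T_amb**4) / (rho * c_p * r),
    where r is a characteristic length scale such as the voxel size.
    (Advection would be handled separately, e.g., semi-Lagrangian.)
    """
    sigma_cells = np.sqrt(2.0 * diffusivity * dt) / dx
    T = gaussian_filter(T, sigma=sigma_cells)           # conduction as blurring
    cooling = 3.0 * emissivity * sigma_sb * (T**4 - T_amb**4) / (rho * c_p * r)
    return T - dt * cooling                              # radiative cooling

# Placeholder example: one hot voxel in ambient air on a 32^3 grid.
T = np.full((32, 32, 32), 300.0)
T[16, 16, 16] = 2200.0
T = step_temperature(T, dt=0.04, dx=0.1, diffusivity=2e-5,
                     emissivity=1.0, sigma_sb=5.67e-8, T_amb=300.0,
                     rho=1.2, c_p=1005.0, r=0.1)
print(T.max())
```

Consistent with paragraph [0132], a smaller time step dt yields a narrower Gaussian kernel, so the blur (and hence the simulated conduction) is correspondingly weaker per step.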
[0146] Still images that are output by visual content generation system 600 might be represented in computer memory as pixel arrays, such as a two-dimensional array of pixel color values, each associated with a pixel having a position in a two-dimensional image array. Pixel color values might be represented by three or more (or fewer) color values per pixel, such as a red value, a green value, and a blue value (e.g., in RGB format). Dimensions of such a two-dimensional array of pixel color values might correspond to a preferred and/or standard display scheme, such as 1920-pixel columns by 1280-pixel rows or 4096-pixel columns by 2160-pixel rows, or some other resolution. Images might or might not be stored in a compressed format, but either way, a desired image may be represented as a two- dimensional array of pixel color values. In another variation, images are represented by a pair of stereo images for three-dimensional presentations and in other variations, an image output, or a portion thereof, might represent three-dimensional imagery instead of just two- dimensional views. In yet other embodiments, pixel values are data structures and a pixel value is associated with a pixel and can be a scalar value, a vector, or another data structure associated with a corresponding pixel. That pixel value might include color values, or not, and might include depth values, alpha values, weight values, object identifiers or other pixel value components. [0147] A stored video sequence might include a plurality of images such as the still images described above, but where each image of the plurality of images has a place in a timing sequence and the stored video sequence is arranged so that when each image is displayed in order, at a time indicated by the timing sequence, the display presents what appears to be moving and/or changing imagery. In one representation, each image of the plurality of images is a video frame having a specified frame number that corresponds to an amount of time that would elapse from when a video sequence begins playing until that specified frame is displayed. A frame rate might be used to describe how many frames of the stored video sequence are displayed per unit time. Example video sequences might include 24 frames per second (24 FPS), 50 FPS, 140 FPS, or other frame rates. In some embodiments, frames are interlaced or otherwise presented for display, but for clarity of description, in some examples, it is assumed that a video frame has one specified display time, but other variations might be contemplated. [0148] One method of creating a video sequence is to simply use a video camera to record a live action scene, i.e., events that physically occur and can be recorded by a video camera. The events being recorded can be events to be interpreted as viewed (such as seeing two human actors talk to each other) and/or can include events to be interpreted differently due to clever camera operations (such as moving actors about a stage to make one appear larger than the other despite the actors actually being of similar build, or using miniature objects with other miniature objects so as to be interpreted as a scene containing life-sized objects). [0149] Creating video sequences for story-telling or other purposes often calls for scenes that cannot be created with live actors, such as a talking tree, an anthropomorphic object, space battles, and the like. Such video sequences might be generated computationally rather than capturing light from live scenes. 
In some instances, an entirety of a video sequence might be generated computationally, as in the case of a computer-animated feature film. In some video sequences, it is desirable to have some computer-generated imagery and some live action, perhaps with some careful merging of the two. [0150] While computer-generated imagery might be creatable by manually specifying each color value for each pixel in each frame, this is likely too tedious to be practical. As a result, a creator uses various tools to specify the imagery at a higher level. As an example, an artist (e.g., artist 142 illustrated in FIG. 1) might specify the positions in a scene space, such as a three-dimensional coordinate system, of objects and/or lighting, as well as a camera viewpoint, and a camera view plane. From that, a rendering engine could take all of those as inputs, and compute each of the pixel color values in each of the frames. In another example, an artist specifies position and movement of an articulated object having some specified texture rather than specifying the color of each pixel representing that articulated object in each frame. [0151] In a specific example, a rendering engine performs ray tracing wherein a pixel color value is determined by computing which objects lie along a ray traced in the scene space from the camera viewpoint through a point or portion of the camera view plane that corresponds to that pixel. For example, a camera view plane might be represented as a rectangle having a position in the scene space that is divided into a grid corresponding to the pixels of the ultimate image to be generated, and if a ray defined by the camera viewpoint in the scene space and a given pixel in that grid first intersects a solid, opaque, blue object, that given pixel is assigned the color blue. Of course, for modern computer-generated imagery, determining pixel colors – and thereby generating imagery – can be more complicated, as there are lighting issues, reflections, interpolations, and other considerations. [0152] As illustrated in FIG. 6, a live action capture system 602 captures a live scene that plays out on a stage 604. Live action capture system 602 is described herein in greater detail, but might include computer processing capabilities, image processing capabilities, one or more processors, program code storage for storing program instructions executable by the one or more processors, as well as user input devices and user output devices, not all of which are shown. [0153] In a specific live action capture system, cameras 606(1) and 606(2) capture the scene, while in some systems, there might be other sensor(s) 608 that capture information from the live scene (e.g., infrared cameras, infrared sensors, motion capture (“mo-cap”) detectors, etc.). On stage 604, there might be human actors, animal actors, inanimate objects, background objects, and possibly an object such as a green screen 610 that is designed to be captured in a live scene recording in such a way that it is easily overlaid with computer- generated imagery. Stage 604 might also contain objects that serve as fiducials, such as fiducials 612(1)-(3), that might be used post-capture to determine where an object was during capture. A live action scene might be illuminated by one or more lights, such as an overhead light 614. [0154] During or following the capture of a live action scene, live action capture system 602 might output live action footage to a live action footage storage 620. 
A live action processing system 622 might process live action footage to generate data about that live action footage and store that data into a live action metadata storage 624. Live action processing system 622 might include computer processing capabilities, image processing capabilities, one or more processors, program code storage for storing program instructions executable by the one or more processors, as well as user input devices and user output devices, not all of which are shown. Live action processing system 622 might process live action footage to determine boundaries of objects in a frame or multiple frames, determine locations of objects in a live action scene, where a camera was relative to some action, distances between moving objects and fiducials, etc. Where elements have sensors attached to them or are detected, the metadata might include location, color, and intensity of overhead light 614, as that might be useful in post-processing to match computer-generated lighting on objects that are computer-generated and overlaid on the live action footage. Live action processing system 622 might operate autonomously, perhaps based on predetermined program instructions, to generate and output the live action metadata upon receiving and inputting the live action footage. The live action footage can be camera-captured data as well as data from other sensors.

[0155] An animation creation system 630 is another part of visual content generation system 600. Animation creation system 630 might include computer processing capabilities, image processing capabilities, one or more processors, program code storage for storing program instructions executable by the one or more processors, as well as user input devices and user output devices, not all of which are shown. Animation creation system 630 might be used by animation artists, managers, and others to specify details, perhaps programmatically and/or interactively, of imagery to be generated. From user input and data from a database or other data source, indicated as a data store 632, animation creation system 630 might generate and output data representing objects (e.g., a horse, a human, a ball, a teapot, a cloud, a light source, a texture, etc.) to an object storage 634, generate and output data representing a scene into a scene description storage 636, and/or generate and output data representing animation sequences to an animation sequence storage 638.

[0156] Scene data might indicate locations of objects and other visual elements, values of their parameters, lighting, camera location, camera view plane, and other details that a rendering engine 650 might use to render CGI imagery. For example, scene data might include the locations of several articulated characters, background objects, lighting, etc. specified in a two-dimensional space, three-dimensional space, or other dimensional space (such as a 2.5-dimensional space, three-quarter dimensions, pseudo-3D spaces, etc.) along with locations of a camera viewpoint and view plane from which to render imagery. For example, scene data might indicate that there is to be a red, fuzzy, talking dog in the right half of a video and a stationary tree in the left half of the video, all illuminated by a bright point light source that is above and behind the camera viewpoint. In some cases, the camera viewpoint is not explicit, but can be determined from a viewing frustum. In the case of imagery that is to be rendered to a rectangular view, the frustum would be a truncated pyramid.
Other shapes for a rendered view are possible and the camera view plane could be different for different shapes. [0157] Animation creation system 630 might be interactive, allowing a user to read in animation sequences, scene descriptions, object details, etc. and edit those, possibly returning them to storage to update or replace existing data. As an example, an operator might read in objects from object storage into a baking processor 642 that would transform those objects into simpler forms and return those to object storage 634 as new or different objects. For example, an operator might read in an object that has dozens of specified parameters (movable joints, color options, textures, etc.), select some values for those parameters and then save a baked object that is a simplified object with now fixed values for those parameters. [0158] Rather than requiring user specification of each detail of a scene, data from data store 632 might be used to drive object presentation. For example, if an artist is creating an animation of a spaceship passing over the surface of the Earth, instead of manually drawing or specifying a coastline, the artist might specify that animation creation system 630 is to read data from data store 632 in a file containing coordinates of Earth coastlines and generate background elements of a scene using that coastline data. [0159] Animation sequence data might be in the form of time series of data for control points of an object that has attributes that are controllable. For example, an object might be a humanoid character with limbs and joints that are movable in manners similar to typical human movements. An artist can specify an animation sequence at a high level, such as “the left hand moves from location (X1, Y1, Z1) to (X2, Y2, Z2) over time T1 to T2”, at a lower level (e.g., “move the elbow joint 2.5 degrees per frame”) or even at a very high level (e.g., “character A should move, consistent with the laws of physics that are given for this scene, from point P1 to point P2 along a specified path”). [0160] Animation sequences in an animated scene might be specified by what happens in a live action scene. An animation driver generator 644 might read in live action metadata, such as data representing movements and positions of body parts of a live actor during a live action scene. Animation driver generator 644 might generate corresponding animation parameters to be stored in animation sequence storage 638 for use in animating a CGI object. This can be useful where a live action scene of a human actor is captured while wearing mo- cap fiducials (e.g., high-contrast markers outside actor clothing, high-visibility paint on actor skin, face, etc.) and the movement of those fiducials is determined by live action processing system 622. Animation driver generator 644 might convert that movement data into specifications of how joints of an articulated CGI character are to move over time. [0161] A rendering engine 650 can read in animation sequences, scene descriptions, and object details, as well as rendering engine control inputs, such as a resolution selection and a set of rendering parameters. 
Resolution selection might be useful for an operator to control a trade-off between speed of rendering and clarity of detail, as speed might be more important than clarity for a movie maker to test some interaction or direction, while clarity might be more important than speed for a movie maker to generate data that will be used for final prints of feature films to be distributed. Rendering engine 650 might include computer processing capabilities, image processing capabilities, one or more processors, program code storage for storing program instructions executable by the one or more processors, as well as user input devices and user output devices, not all of which are shown. [0162] Visual content generation system 600 can also include a merging system 660 that merges live footage with animated content. The live footage might be obtained and input by reading from live action footage storage 620 to obtain live action footage, by reading from live action metadata storage 624 to obtain details such as presumed segmentation in captured images segmenting objects in a live action scene from their background (perhaps aided by the fact that green screen 610 was part of the live action scene), and by obtaining CGI imagery from rendering engine 650. [0163] A merging system 660 might also read data from rulesets for merging/combining storage 662. A very simple example of a rule in a ruleset might be “obtain a full image including a two-dimensional pixel array from live footage, obtain a full image including a two-dimensional pixel array from rendering engine 650, and output an image where each pixel is a corresponding pixel from rendering engine 650 when the corresponding pixel in the live footage is a specific color of green, otherwise output a pixel value from the corresponding pixel in the live footage.” [0164] Merging system 660 might include computer processing capabilities, image processing capabilities, one or more processors, program code storage for storing program instructions executable by the one or more processors, as well as user input devices and user output devices, not all of which are shown. Merging system 660 might operate autonomously, following programming instructions, or might have a user interface or programmatic interface over which an operator can control a merging process. In some embodiments, an operator can specify parameter values to use in a merging process and/or might specify specific tweaks to be made to an output of merging system 660, such as modifying boundaries of segmented objects, inserting blurs to smooth out imperfections, or adding other effects. Based on its inputs, merging system 660 can output an image to be stored in a static image storage 670 and/or a sequence of images in the form of video to be stored in an animated/combined video storage 672. [0165] Thus, as described, visual content generation system 600 can be used to generate video that combines live action with computer-generated animation using various components and tools, some of which are described in more detail herein. While visual content generation system 600 might be useful for such combinations, with suitable settings, it can be used for outputting entirely live action footage or entirely CGI sequences. The code may also be provided and/or carried by a transitory computer readable medium, e.g., a transmission medium such as in the form of a signal transmitted over a network. 
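By way of a non-limiting, hypothetical illustration of the ruleset example in paragraph [0163]: the sketch below keys on an approximate green color to choose between a live-footage pixel and a rendered pixel; the specific color, tolerance, and function names are illustrative placeholders and are not part of merging system 660.

```python
import numpy as np

def merge_green_screen(live, cgi, green=(0, 177, 64), tol=60):
    """Minimal sketch of the rule from paragraph [0163]: wherever the live
    footage pixel is (approximately) the screen's green, output the rendered
    CGI pixel; otherwise output the live-action pixel.

    live, cgi : (H, W, 3) uint8 RGB arrays of the same size.
    green, tol: illustrative chroma-key color and per-channel tolerance;
                a production merging system would read these from a ruleset.
    """
    live16 = np.asarray(live, dtype=np.int16)
    is_green = np.all(np.abs(live16 - np.array(green)) <= tol, axis=-1)
    out = np.where(is_green[..., None], cgi, live)
    return out.astype(np.uint8)
```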
[0166] According to one embodiment, the techniques described herein are implemented by one or more generalized computing systems programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Special- purpose computing devices may be used, such as desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques. [0167] For example, FIG. 7 is a block diagram that illustrates a computer system 700 upon which the computer systems of the systems described herein and/or visual content generation system 600 (see FIG. 6) may be implemented. Computer system 700 includes a bus 702 or other communication mechanism for communicating information, and a processor 704 coupled with bus 702 for processing information. Processor 704 may be, for example, a general-purpose microprocessor. [0168] Computer system 700 also includes a main memory 706, such as a random-access memory (RAM) or other dynamic storage device, coupled to bus 702 for storing information and instructions to be executed by processor 704. Main memory 706 may also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 704. Such instructions, when stored in non-transitory storage media accessible to processor 704, render computer system 700 into a special-purpose machine that is customized to perform the operations specified in the instructions. [0169] Computer system 700 further includes a read only memory (ROM) 708 or other static storage device coupled to bus 702 for storing static information and instructions for processor 704. A storage device 710, such as a magnetic disk or optical disk, is provided and coupled to bus 702 for storing information and instructions. [0170] Computer system 700 may be coupled via bus 702 to a display 712, such as a computer monitor, for displaying information to a computer user. An input device 714, including alphanumeric and other keys, is coupled to bus 702 for communicating information and command selections to processor 704. Another type of user input device is a cursor control 716, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 704 and for controlling cursor movement on display 712. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. [0171] Computer system 700 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 700 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 700 in response to processor 704 executing one or more sequences of one or more instructions contained in main memory 706. Such instructions may be read into main memory 706 from another storage medium, such as storage device 710. Execution of the sequences of instructions contained in main memory 706 causes processor 704 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. 
[0172] The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may include non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 710. Volatile media includes dynamic memory, such as main memory 706. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge.

[0173] Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire, and fiber optics, including the wires that include bus 702. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.

[0174] Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 704 for execution. For example, the instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a network connection. A modem or network interface local to computer system 700 can receive the data. Bus 702 carries the data to main memory 706, from which processor 704 retrieves and executes the instructions. The instructions received by main memory 706 may optionally be stored on storage device 710 either before or after execution by processor 704.

[0175] Computer system 700 also includes a communication interface 718 coupled to bus 702. Communication interface 718 provides a two-way data communication coupling to a network link 720 that is connected to a local network 722. For example, communication interface 718 may be a network card, a modem, a cable modem, or a satellite modem to provide a data communication connection to a corresponding type of telephone line or communications line. Wireless links may also be implemented. In any such implementation, communication interface 718 sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.

[0176] Network link 720 typically provides data communication through one or more networks to other data devices. For example, network link 720 may provide a connection through local network 722 to a host computer 724 or to data equipment operated by an Internet Service Provider (ISP) 726. ISP 726 in turn provides data communication services through the world-wide packet data communication network now commonly referred to as the “Internet” 728. Local network 722 and Internet 728 both use electrical, electromagnetic, or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 720 and through communication interface 718, which carry the digital data to and from computer system 700, are example forms of transmission media.
[0177] Computer system 700 can send messages and receive data, including program code, through the network(s), network link 720, and communication interface 718. In the Internet example, a server 730 might transmit a requested code for an application program through the Internet 728, ISP 726, local network 722, and communication interface 718. The received code may be executed by processor 704 as it is received, and/or stored in storage device 710, or other non-volatile storage for later execution.

[0178] Operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. Processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. The code may be stored on a computer-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. The computer-readable storage medium may be non-transitory. The code may also be provided and/or carried by a transitory computer-readable medium, e.g., a transmission medium such as in the form of a signal transmitted over a network.

[0179] Conjunctive language, such as phrases of the form “at least one of A, B, and C,” or “at least one of A, B and C,” unless specifically stated otherwise or otherwise clearly contradicted by context, is otherwise understood with the context as used in general to present that an item, term, etc., may be either A or B or C, or any nonempty subset of the set of A and B and C. For instance, in the illustrative example of a set having three members, the conjunctive phrases “at least one of A, B, and C” and “at least one of A, B and C” refer to any of the following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of A, at least one of B, and at least one of C each to be present.

[0180] The term ‘comprising’ as used in this specification means ‘consisting at least in part of’. When interpreting each statement in this specification that includes the term ‘comprising’, features other than that or those prefaced by the term may also be present. Related terms such as ‘comprise’ and ‘comprises’ are to be interpreted in the same manner.

[0181] The use of examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments of the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.

[0182] In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
The sole and exclusive indicator of the scope of the invention, and what is intended by the applicants to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. [0183] Further embodiments can be envisioned to one of ordinary skill in the art after reading this disclosure. In other embodiments, combinations or sub-combinations of the above- disclosed invention can be advantageously made. The example arrangements of components are shown for purposes of illustration and combinations, additions, re-arrangements, and the like are contemplated in alternative embodiments of the present invention. Thus, while the invention has been described with respect to exemplary embodiments, one skilled in the art will recognize that numerous modifications are possible. [0184] For example, the processes described herein may be implemented using hardware components, software components, and/or any combination thereof. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims and that the invention is intended to cover all modifications and equivalents within the scope of the following claims. [0185] In this specification where reference has been made to patent specifications, other external documents, or other sources of information, this is generally for the purpose of providing a context for discussing the features of the invention. Unless specifically stated otherwise, reference to such external documents or such sources of information is not to be construed as an admission that such documents or such sources of information, in any jurisdiction, are prior art or form part of the common general knowledge in the art. [0186] All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.