Title:
DEFOCUSING WITH PROJECTION GRID INCLUDING IDENTIFICATION FEATURES
Document Type and Number:
WIPO Patent Application WO/2014/011179
Kind Code:
A1
Abstract:
Light is projected in a pattern toward a surface where the elements of the pattern are of different color and/or intensity. These differences enable improved match-up of imaged points within the pattern captured by different cameras or camera portions offset along a common optical axis, for the purpose of defocusing imaging to make depth (z-axis) determinations from captured two-dimensional (x,y-axis) image data.

Inventors:
GHARIB MORTEZA (US)
Application Number:
PCT/US2012/046484
Publication Date:
January 16, 2014
Filing Date:
July 12, 2012
Assignee:
CALIFORNIA INST OF TECHN (US)
GHARIB MORTEZA (US)
International Classes:
G06K9/34
Foreign References:
US20090295908A1 (2009-12-03)
US20080259354A1 (2008-10-23)
US20110074932A1 (2011-03-31)
US5604344A (1997-02-18)
Attorney, Agent or Firm:
ONE, LLP (West Tower Suite 110, Newport Beach CA, US)
Claims:
CLAIMS

1. An imaging method for depth determination of an object, the method comprising:

projecting a pattern of light at a surface, the pattern including a plurality of elements differentiatable from each other;

capturing a plurality of offset images of reflected light from an object to provide image data, wherein the captured images differ only in element position; and

processing the image data by defocusing imaging, wherein differentiatable elements in the pattern are matched between the plurality of offset images for making depth determinations.

2. The method of claim 1, wherein the elements are differentiatable by color.

3. The method of claim 2, wherein three different light colors are provided in a repeating sequence in a line.

4. The method of claim 3, wherein the repeating sequence is indexed one position in a plurality of successive lines to form the pattern.

5. The method of claim 1, wherein the elements are differentiatable by intensity.

6. The method of claim 5, wherein three different light intensities are provided in a repeating sequence in a line.

7. The method of claim 6, wherein the repeating sequence is indexed one position in a plurality of successive lines to form the pattern.

8. The method of claim 1, wherein the elements are differentiatable by size.

9. The method of claim 8, wherein three different light point sizes are provided in a repeating sequence in a line.

10. The method of claim 9, wherein the repeating sequence is indexed one position in a plurality of successive lines to form the pattern.

11. The method of claim 1, wherein the elements are differentiatable by shape.

12. The method of claim 11, wherein three different light point shapes are provided in a repeating sequence in a line.

13. The method of claim 12, wherein the repeating sequence is indexed one position in a plurality of successive lines to form the pattern.

14. The method of claim 1 , wherein the pattern has no central addressable feature.

15. A defocusing imaging system comprising:

a pattern projector adapted to direct light at a surface with a pattern including a plurality of elements differentiatable from each other;

at least one camera adapted to capture a plurality of offset images of reflected light from the surface to provide image data, wherein the captured images differ only in element position; and

a processor, the processor adapted for making depth determinations by defocusing imaging, wherein differentiatable elements in the pattern are matched between the plurality of offset images.

16. The imaging system of claim 15, wherein light passes through a plurality of apertures, and the apertures are uncoded with respect to any of light wavelength, polarization, or pulse duration.

17. The imaging system of claim 15, wherein the projector is adapted to project elements differentiatable by color.

18. The imaging system of claim 15, wherein the projector is adapted to project elements differentiatable by intensity.

19. The imaging system of claim 15, wherein the projector is adapted to project elements differentiatable by size.

20. The imaging system of claim 15, wherein the projector is adapted to project elements differentiatable by shape.

Description:
DEFOCUSING WITH PROJECTION GRID INCLUDING IDENTIFICATION FEATURES

BACKGROUND

[0001] A number of patents, each including Dr. Morteza Gharib as an inventor and assigned to the California Institute of Technology, cover useful hardware configurations for performing profilometry and velocimetry using "defocusing" techniques. These patents include USPNs 6,278,847; 7,006,132; 7,612,869 and 7,612,870.

[0002] The process of defocusing is one in which large data structures are generated by light capture with a CMOS, CCD or other imaging apparatus through restricted areas positioned at different radial locations along a common optical axis. Essentially, defocusing is based on computing depth from the predictable way images go out-of-focus when an object is imaged off of the focal plane. The shift of matched-up image point features or patterns imaged through the offset apertures is used to measure depth.
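
For orientation, the geometry behind that shift can be written down compactly. The relation below is a standard thin-lens sketch from the defocusing literature, not a formula stated in this application; d is the aperture offset from the optical axis, L the distance to the reference (focal) plane, M the magnification at that plane, Z the object depth, and b the measured shift between matched image points:

    % Illustrative two-aperture defocusing relation (assumed standard
    % form; this formula is not stated in the present application).
    b = M\,d\,\frac{L - Z}{Z}
    \qquad\Longrightarrow\qquad
    Z = \frac{M\,d\,L}{M\,d + b}

A point on the focal plane (Z = L) produces no shift; points nearer the camera shift more, which is what allows depth recovery from matched point pairs.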

[0003] Sometimes scenes captured from different vantage points are combined to construct a model of an object larger than the field of view of the imaging apparatus, utilizing the position or "pose" of the imager to "stitch" together or otherwise aggregate the 3-D data. To construct a 3-D model of an object with such accuracy that it can be reproduced in a physical model or otherwise, a vast number of such points or features are processed. Robust methodology for determining the pose of a camera is described in PCT/US10/57532.

[0004] US Publication No. 2009/0295908 discloses a defocusing imaging system modified to use a projected pattern of laser dots specifically to acquire dense point-cloud profilometry (i.e., 3D profile measurement) data for the observed object. Use of these laser dot "markers" permits relatively high-resolution imaging of a surface as the light is reflected from the object to be imaged, captured on a sensor, matched up between apertures, and used to make z-axis determinations. Different color-coded apertures are employed to code the reflected light and make defocusing determinations for image aggregation.

[0005] Another approach involving defocusing and a projected pattern is disclosed in US Publication No. 2011/0074932. A projector projects an optical pattern toward a surface. A camera with at least two off-axis apertures is arranged to obtain an image of the projected pattern with the defocused information. The camera is movable between different positions to image the surface from the different positions, and the projector is set at a specified angle of at least 5 degrees relative to the camera. A processor carries out a first operation using information received through the apertures to determine a pose of the camera and to determine three-dimensional information about the object based on a degree of deformation of the optical pattern on the imaged surface.

[0006] USPN 7,916,309 discloses various defocusing system options in which a light projection system may be provided to project a predetermined pattern onto a surface of the desired object to allow mapping of unmarked surfaces in three dimensions. The pattern is shown in the form of a (regular/consistent) grid with a distinguishable center point. As stated, however, the pattern need not be symmetrical or contain a regular sequence of markers or images. The reference discloses examples of suitable addressable-pattern information: a color sequence pattern, a pattern comprising differently shaped objects, a position sequence pattern, distinguishable object features or object landmarks, or any combination thereof. Yet, whatever the properties of the projected pattern, in use it is static with respect to the contours of the imaged object. The camera moves and takes images from multiple viewpoints, each containing at least a portion of the addressable pattern such that an address can be assigned to each point in the pattern. Moreover, it is stated that a prerequisite for the system is that the patterns of captured light from each viewpoint be physically separate. In a single-lens defocusing system, it is stated that light through the apertures must be separated by prisms, fiber optics, or color filters so as to record separate image information from each viewpoint.

[0007] Thus, each of the '908, '932 and '309 systems is limited to highly selective projection techniques, image separation and/or data processing techniques, plus hardware associated with those restrictions. Their teachings are only applicable in a limited context.

[0008] In a hypothetical (prior art) defocusing system with no filter or sensor coding (e.g., as taught in USPN 7,612,870 and/or 7,89,078), a number of apertures each act as a pinhole camera for which the imager records an equal number of images (i.e., 3 apertures will yield 3x as many image points on the sensor). When the images captured contain numerous points, many of these will inevitably overlap, resulting in data loss. In many cases it may also not be possible to identify the aperture source of the points (i.e., if it cannot be determined which points come from which aperture). The resulting ambiguity leads to "ghost" particles - phantom matches of images that are near, but not truly representative of, the correct object data.

[0009] Recognizing image triplets vs. doublets can offer certain advantages in matching points. Triplet, quadruplet, etc., grouping is more distinctive than that of two points defining a line. Also, inclusion of more pattern points enables the averaging out of errors when locating the points. However, with an increasing number of apertures (to define triplets, quadruplets, etc.), computational intensity increases, as does the prospect of image crowding as discussed above.
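
To make the scale of the ambiguity concrete, a toy count can help. The following sketch is illustrative only; the two-aperture setup, point count, and per-point candidate count are assumptions, not figures from this application:

    # Toy count of "ghost" match candidates with uncoded apertures.
    # Assumptions: 2 apertures, n_points true points per aperture image,
    # and on average k plausible partners inside each point's search window.
    def match_ambiguity(n_points, k_candidates_per_point):
        true_matches = n_points
        ghost_candidates = n_points * k_candidates_per_point - true_matches
        return true_matches, ghost_candidates

    # Example: 1000 projected points with ~5 plausible partners each
    # leaves 4000 ghost candidates competing with 1000 true matches.
    print(match_ambiguity(1000, 5))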

[0010] By using channels separated by color, polarization, space or time, or by using a static addressable pattern, each recorded point for a given sensor area is known to correspond to a given aperture, thereby greatly assisting in point match-up without interference. Such distinction also assists in point/feature matching and, thus, defocusing as a process. However, none of these achieves the benefits of reduced image crowding without some sort of optical tract modification or severe functional compromise (as in the static projection example).

[0011] The embodiments described herein offer image-crowding solutions without such complications or limitations. Especially in connection with object scanning to measure and/or define surface topography, these embodiments offer marked advantages, such as the potential for dramatically increased point density acquisition relative to known systems. Other advantages are provided, or will become apparent to those of ordinary skill in the art, in the description set forth herein.

SUMMARY

[0012] The present inventions offer improvement to known defocusing systems, in essence, by embedding information in a projected light pattern sent to the object to be imaged. When the imager is mobile, so too is the projector associated therewith. Otherwise, the camera and the projector may be stationary and the imaged object moved. In any case, the projected information is not static with respect to the object being imaged or tracked, and may be transmitted as reflected light through a plurality of un-coded apertures and received on a sensor with the light patterns from each aperture overlapping the other.

[0013] The reason why such an approach does not simply result in an image-crowding problem (as potentially is the case with the laser light marker pattern reflection received through the apertures in the '908 publication) has to do with the nature of the information embedded in the projected pattern. In one embodiment, the pattern includes different-color points; in another embodiment, the pattern includes different-intensity points; in yet another, different-size points; and in still another, differently shaped points/dots may be used. Likewise, a combination of such differentiation may be employed. In any case, the elements of the pattern (referred to as points, dots, markers, features, etc.) are distinguishable or differentiated from one another, thus serving as identification features within the pattern.

[0014] The "different-color" light may be in and/or out of the visible range of frequencies.

Keeping the light in the visible range may be advantageous when employing commonly available light filters (e.g., on a sensor) as further described below. However, filter technology available in the near-IR and near-UV to extent the range of usable

frequencies with custom sensor(s) is commonly available as well. Likewise, the different intensity light may be visible, but may more advantageously be in the IR or near-IR spectrum so that it will go unnoticed by a user.

[0015] In each approach, the number of similar reflected light patterns (captured in duplicate, triplicate, etc.) corresponds to the number of apertures. Of course, the patterns will differ slightly due to aperture offset from the optical axis. And even though the patterns may overlap (when images received through a given aperture are not routed to a dedicated sensor or sensor portions), it can be determined which portion of the overall captured signal is provided by one pattern through a first aperture and another pattern through a second aperture, etc.

[0016] The number of colors and/or intensities, etc., employed will depend on the situation. The selection is a compromise between complexity (system and computational complexity) and the spatial resolution the system can achieve (i.e., how tight a point mesh, or how short a nodal distance, can be achieved between points defining a surface). Providing a greater number of intensities, colors, sizes, shapes, combinations, etc., allows subdivision of the space between each feature into finer granulation. However, with more levels, greater control is required for the structured lighting, better resolution is needed between image intensities, colors, etc., and more computational power is needed for resolving 3D data.

[0017] In a given pattern, the variation in a differentiating feature provides information about which imaged dots within the pattern should be matched by the computer. This information is what reduces the crowding problem noted above. As a secondary effect of reduced crowding, the front-to-back "range" of depth that can be determined may also increase.

[0018] In any case, a known challenge in practicing defocusing concerns the ability to determine which of multiple simultaneously imaged points captured on an imager correspond to one another. For defocusing to be possible, matches must be made. As such, the additional information of the projected patterns can be of assistance. For example, where it is known that a green marker should be to the right of a red one, the program can ignore nearby red or blue dots in making matches.
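
As a sketch of how such embedded identification prunes match candidates, consider the following. The function name, data layout, and search-window numbers are illustrative assumptions, not the application's algorithm; the key line is the color test, which discards nearby candidates of the wrong type:

    # Prune defocusing match candidates using the projected-pattern color.
    # Dots are (x, y, color) tuples; dots_a and dots_b come from two offset
    # apertures sharing a common optical axis. Radii/axes are assumptions.
    def match_dots(dots_a, dots_b, max_shift=20.0):
        matches = []
        for xa, ya, ca in dots_a:
            candidates = [
                (xb, yb) for xb, yb, cb in dots_b
                if cb == ca                      # same pattern element type only
                and abs(yb - ya) < 2.0           # shift expected along aperture offset axis
                and 0.0 <= xb - xa <= max_shift  # plausible defocus shift
            ]
            if len(candidates) == 1:             # keep only unambiguous matches
                matches.append(((xa, ya), candidates[0]))
        return matches

    dots_a = [(10.0, 5.0, "red"), (12.0, 5.0, "green")]
    dots_b = [(13.0, 5.0, "red"), (15.0, 5.0, "green")]
    print(match_dots(dots_a, dots_b))  # without the color test, (10, 5) is ambiguous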

[0019] The color information offered with a colored projected pattern may be resolved using a typical color image sensor (e.g., a CMOS or CCD unit incorporating a Bayer filter). For signals of different intensity, a CMOS or CCD without any filter may be preferred.
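
A minimal sketch of recovering a dot's pattern color from a demosaiced RGB frame follows; the reference colors, window size, and normalization are assumptions for illustration:

    # Classify a detected dot's projected color by nearest reference color.
    # Reference colors and window size are illustrative assumptions.
    import numpy as np

    REFERENCE = {"red":   np.array([1.0, 0.0, 0.0]),
                 "green": np.array([0.0, 1.0, 0.0]),
                 "blue":  np.array([0.0, 0.0, 1.0])}

    def classify_dot(rgb_image, x, y, half_window=2):
        patch = rgb_image[y - half_window:y + half_window + 1,
                          x - half_window:x + half_window + 1]
        mean = patch.reshape(-1, 3).mean(axis=0)
        mean = mean / (np.linalg.norm(mean) + 1e-9)  # discount overall brightness
        return min(REFERENCE, key=lambda c: np.linalg.norm(mean - REFERENCE[c]))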

[0020] In any event, the signal differentiation present between captured image points allows for computationally simple comparison across sets of points. This, in turn, permits use of an extremely dense grid field, with high certainty in dot matches. A method of performing the same is described in particular detail below.

[0021] Interestingly, image crowding on the sensor (also referred to as an imager or camera) may also be less problematic because a fraction of the overlap occurring between different-color markers is filtered out. With any of the single-class differentiation methods (i.e., by color, intensity, size, shape, polarization), or a combination thereof, it is also possible to disqualify points exceeding a certain threshold intensity and/or ones that are misshapen in intensity profile. In other words, such points will be known overlaps, and this information can be used in more complex computer processing. Note, however, that the various advantages of the subject pattern matching in avoiding the above-referenced "ghost" particles may generally be more important: mismatched points or particles can result in large errors, whereas image overlap generally results only in fewer image points for data.
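
A sketch of that disqualification test, with thresholds that are illustrative assumptions rather than values from the application:

    # Flag dots that are probably overlaps of two pattern points: overlaps
    # tend to be brighter than a lone dot and elongated rather than round.
    # Both threshold values are illustrative assumptions.
    def is_probable_overlap(peak_intensity, eccentricity,
                            max_intensity=0.95, max_eccentricity=1.5):
        return peak_intensity > max_intensity or eccentricity > max_eccentricity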

[0022] Regardless, because of the various advantages described above relating to minimizing problems associated with image crowding, significantly higher projected pattern densities can be utilized for defocusing imaging than in the '908 application. The increased density offers improved spatial resolution in imaging as described above.

[0023] The matching advantages can be employed as described above for complex three-dimensional (or four-dimensional, tracking object movement in time) surface mapping calculations and associated data structure output. However, the advantages can be used as effectively in simpler systems as well. In any event, the advantages available from the teachings herein allow for precise object mapping and tracking. As such, a system embodying them can capture fine hand gestures, enabling a new range of applications. For such purposes, it may be preferred to use IR projection so as not to distract the user. In addition, a display may picture "virtual" tools, handle interfaces, etc., moving in coordination with the captured hand gestures.

[0024] Without employing large or exaggerated gestures (as required in use of the commercially available KINECT system by Microsoft, Inc.), sign language (e.g., ASL signs) and other subtle gestures associated with games of chance (such as the "hit" or "stand" gestures in playing the card game Blackjack) can also be captured. Further use of defocusing principles for such purposes can be appreciated in reference to commonly owned US Provisional Patent Application No. 61/569,141, filed December 9, 2011, having the same inventor hereof and incorporated by reference herein in its entirety.

[0025] Also provided here is software adapted to manipulate data structures to embody the methodologies described. These methodologies necessarily involve computational intensity beyond human capacity. They must be carried out by a processor (i.e., standard or custom/dedicated) in connection with memory (i.e., electronic, magnetic, optical, spintronic, etc.). As such, the necessary hardware includes computer electronics. The hardware further includes an optical system with one or more sensors as described above provided in an arrangement suitable for defocusing imaging.

[0026] In contrast to the references noted above, the imaging apertures are indistinguishable from each other (i.e., not paired with different color filters, different polarization masks, set with optics to isolate associated sensors or sensor portions, time-wise switched with shutters or other means, etc.). In the multiple-intensity embodiments, the apertures may be filtered with the same "color" to eliminate signal noise from other reflected light without introducing aperture-to-aperture differentiation.

[0027] Regardless, a projection system is provided. It is adapted to project a pattern of light markers of different size, intensity, shape and/or color. Preferably the grid pattern is regular in order to achieve maximized (for a given application) spatial resolution - though not necessarily so. Further, the pattern optionally, and advantageously, includes no individually distinguishable point(s) - as in an addressable center point per the '309 patent. A laser may be used in these embodiments, as well as an LED or any other kind of light generator for the light source. The light patterns may be provided by multiplying these entities or by using beam splitters, holographic lenses, or diffraction gratings (i.e., as in a diffractive optical element (DOE)). Alternatively, one can also devise a system using a scanning laser, similar in operation to CRT technology. In any case, all examples are provided in a non-limiting sense.

[0028] Likewise, in certain embodiments it should be understood that the illumination and the sensor need not be on the same side of the surface, such as, for example, if measuring on a semi-opaque film. It should also be clear, from those embodiments where the depth is extracted from the position of the projected dots, that the detector need not be concentric with the central axis of the illumination, as any offset can be compensated for in the initial calibration. Aspects of the inventions include the subject hardware, software, and related methods of operation and manufacture.

BRIEF DESCRIPTION OF THE DRAWINGS

[0029] The figures provided are diagrammatic and not drawn to scale. Variations of the inventions from the embodiments pictured are contemplated. Accordingly, depiction of aspects and elements of the inventions in the figures is not intended to limit the scope of the inventions.

[0030] Figs. 1, 2, 3A, and 4 illustrate imaging system embodiments employing the subject patterns, including differentiated elements within the same; Fig. 3B illustrates example variations in the shape of a point in a pattern; and Fig. 5 is a flowchart concerning imaged point matching.

DETAILED DESCRIPTION

[0031] Various example embodiments are described below. Reference is made to these examples in a non-limiting sense. They are provided to illustrate more broadly applicable aspects of the present inventions. Various changes may be made to the embodiments described and equivalents may be substituted without departing from the true spirit and scope of the inventions. In addition, many modifications may be made to adapt a particular situation, material, composition of matter, process, process act(s) or step(s) to the objective(s), spirit or scope of the present inventions. All such modifications are intended to be within the scope of the claims made herein.

[0032] Fig. 1 depicts a structured light projection 100 that is a regular grid with a regular variation in the size of the points. The known variation in light point size can be used to determine which same-size points correspond to each other in the two (or more) cameras (such determination of same-type point pairs at different locations within the pattern being indicated by example pointer pairs α, β, γ). The distance between thusly-matched points can be used to measure depth through defocusing.

[0033] The hardware employed may include cameras 120 (or representative camera portions of a single camera) and a projector 130 including its light source(s) - of any type as noted above or as otherwise implemented - for creating the pattern. Any such hardware may be housed separately or grouped together as indicated with the dashed line, including with a computing device 140. The computing device can be a microprocessor and a graphics processing unit (GPU) included with a video card for optional output display 150, operating according to stored program instructions in connection with digital storage media 160. (Various internal component connections not shown.)

[0034] Notably, whether physically separate camera imagers are used or a single imager is used, optionally with a single lens and multiple apertures, it is to be understood that the reflected images captured on the imager, separate imager portions or separate imagers are not color or otherwise filtered or time-wise separated to alter the incoming signals with respect to each other. The only differences intended in the captured light passed through associated lens(es) and aperture(s) to the sensor(s) derive from differences in viewpoint (e.g., as determined by offset aperture position along a common optical axis) recorded on the sensor(s). In other words, the captured light elements of the pattern differ only (i.e., "only" in any significant sense where image processing is possible) in their position in an absolute sense and/or within the given pattern. Within the definition intended, it is contemplated that some elements of the pattern may not be recognizable or are otherwise corrupted by incomplete reflection, etc.

[0035] In a hardware example where only a single lens and a single sensor are used with multiple offset apertures for defocusing imaging, the different offset images captured will be superimposed on the camera sensor. In a hardware example with a single lens and an offset aperture mask with light from each aperture directed to a separate sensor or sensor portion, the captured light patterns may be combined if properly registered. The same holds true for examples with altogether separate lens arrangements that share a common optical axis. In any case, such hardware is suitable for capturing a plurality of offset images of reflected light patterns from an object to provide image data in which the captured images differ only in element position.

[0036] More generally, an embodiment may be described as comprising at least one camera imaging reflected light from a pattern provided by a projector directed at an object, the light entering the at least one camera through at least one lens and a plurality of offset restricted regions, as may be provided by apertures offset from a common optical axis. The entire sensor, or a portion of a sensor, within each such camera captures one or more offset images. Elements within the projected pattern are distinguishable from one another, and these are matched between separate images from different cameras, which may be combined or laid over images captured in connection with a single camera.

[0037] In Fig. 2, a structured light projection 102 having a grid with regular variation in the intensity of points is shown (the different intensities being indicated by gray scale or, per formal drawing standards, by stippling densities).

[0038] In Fig. 3A, use of a structured light projection 104 that is a regular grid with a regular variation in the shape of points is shown (exemplary shapes of "+", "x" and "o" being employed). However, any recognizable variation in individual element shape may be employed, such as those shown in Fig. 3B.

[0039] Fig. 4 depicts use of a structured light projection 106 that is a regular grid with a regular variation in the color of points (the different colors indicated as red, blue and green - advantageously selected for coordinated use with an off-the-shelf Bayer filter sensor - or by different hatching per formal drawing standards).

[0040] In each such pattern 100, 102, 104 and 106, three differentiatable elements are used in a regular pattern or in a repeating sequence, in repeating lines. In each successive line (as viewed vertically or horizontally) within the grid pattern, the elements are indexed by one position.
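
That layout can be generated directly; the sketch below uses arbitrary grid dimensions and placeholder element labels standing in for the three colors, intensities, sizes, or shapes:

    # Repeating three-element sequence, indexed by one position per line,
    # as described for patterns 100, 102, 104 and 106.
    def indexed_grid(rows, cols, elements=("A", "B", "C")):
        n = len(elements)
        return [[elements[(r + c) % n] for c in range(cols)]
                for r in range(rows)]

    for line in indexed_grid(4, 6):
        print(" ".join(line))
    # A B C A B C
    # B C A B C A
    # C A B C A B
    # A B C A B C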

[0041] As referenced above, the pattern may include additional colors, intensities, sizes and/or shapes for distinction within the pattern, beyond the three-distinct-element embodiments pictured. It may alternatively include only two such elements in the projected pattern, in which case a polarization-based variation with a custom polarizing filter associated with the sensor may be employed, in spite of known difficulties that can be encountered with changes in polarity of reflected light.

[0042] Alternatively, more widely separated colors (such as red and blue, which present very little overlap with commercially available sensors) may be employed. Though neither greater-than-three nor fewer-than-three element patterns are illustrated, these and other examples should be understood to be within the scope of the subject disclosure. The same holds true for the combined approaches referenced above, regardless of the potential permutations.

[0043] In Fig. 5, a method of point matching and continuation of the method for defocusing-based operation is shown. At 200, imaging is performed, optionally with the noted hardware. This may produce a combined scene on a single imager or separate scenes for different cameras. A distinguishable feature (e.g., an imaged color point) is identified at 210 within a running software program. At 220, the program looks for comparable same-type feature points in the same scene (e.g., based on a triplet relationship) or in a different scene captured in connection with a different camera or camera portion (e.g., based on the relation provided by a calibration set). Matching points are thus provided at 230. The process may repeat as indicated by the dashed line.
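
The flow can be pulled together in a short sketch reusing the match_dots function from the earlier sketch; the calibration constants are assumptions, and coordinates are taken to be in the same metric units as the aperture offset:

    # Fig. 5 flow in outline: detect and type features (210), find same-type
    # candidates (220), record matches (230), then defocusing depth (300).
    M, d, L = 0.5, 0.01, 1.0   # magnification, aperture offset, focal-plane distance (assumed)

    def depth_from_shift(b):
        # Z = M*d*L / (M*d + b), per the relation sketched after [0002].
        return (M * d * L) / (M * d + b)

    def process(dots_a, dots_b):
        depths = []
        for (xa, ya), (xb, yb) in match_dots(dots_a, dots_b):   # steps 210-230
            b = ((xb - xa) ** 2 + (yb - ya) ** 2) ** 0.5        # image shift magnitude
            depths.append(((xa, ya), depth_from_shift(b)))      # step 300
        return depths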

[0044] Otherwise, any further processing for defocusing-based depth finding for a given match occurs at 300. Images from multiple scenes taken with different camera pose or position may be aggregated at 310 (e.g., as per below). Final output of surface data, or of control signals for another unit such as a gaming machine, etc., occurs at 320.

[0045] In certain examples, the process stops after point matching and defocusing depth determination. In other examples, any or all of the depth determinations for a given imaged "scene" may be aggregated with other related image scene data taken for an object larger than the imager field of view. In that case, the teachings for approaches to calculating camera pose and transforming and aggregating image data as presented in PCT/US10/57532 may be applied, or others as may be apparent to those with skill in the art.

[0046] Although several embodiments have been disclosed in detail above, other embodiments are possible and the inventors intend these to be encompassed within this specification. The specification describes specific examples to accomplish a more general goal that may be accomplished in numerous other ways. This disclosure is intended to be exemplary, and the claims are intended to cover any modification or alternative which might be predictable to a person having ordinary skill in the art. For example, other forms of processing can be used. Any camera type can be used, including a CCD camera, active pixel, or any other type. Also, other shapes of apertures can be used, including round, oval, triangular, and/or elongated. The above devices can be used with color filters for coding different apertures, but can also be used with polarization or other coding schemes.

[0047] The cameras described herein can be handheld portable units, or machine vision cameras, or underwater units. Or the cameras may be mounted in a stationary position with an object moved relative to them or otherwise configured. Still further, the camera may be worn by a user to record facial expressions or gestures to be blended with animation. Other possibilities exist as well, such as noted in the Summary above.

[0048] Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. Indeed, given the type of pixel-to-pixel matching for imaged points and associated calculations required with the data structures recorded and manipulated, computer use is necessary. In imaging any object, vast sets of data are collected and stored in a data structure requiring significant manipulation in accordance with imaging principles - including defocusing principles/equations - as noted herein and as incorporated by reference.

[0049] To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the exemplary embodiments.

[0050] The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. The processor can be part of a computer system that also has a user interface port that communicates with a user interface, and which receives commands entered by a user, has at least one memory (e.g., hard drive or other comparable storage, and random access memory) that stores electronic information including a program that operates under control of the processor and with communication via the user interface port, and a video output that produces its output via any kind of video output format, e.g., VGA, DVI, HDMI, display port, or any other form.

[0051] A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. These devices may also be used to select values for devices as described herein.

[0052] The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.

[0053] In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory storage can also be rotating magnetic hard disk drives, optical disk drives, or flash memory based storage drives or other such solid state, magnetic, or optical storage devices. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. The computer-readable media can be an article comprising a machine-readable non-transitory tangible medium embodying information indicative of instructions that when performed by one or more machines result in computer-implemented operations comprising the actions described throughout this specification.

[0054] Operations as described herein can be carried out on or over a website. The website can be operated on a server computer, or operated locally, e.g., by being downloaded to the client computer, or operated via a server farm. The website can be accessed over a mobile phone or a PDA, or on any other client. The website can use HTML code in any form, e.g., MHTML, or XML, and via any form such as cascading style sheets ("CSS") or other.

[0055] Also, the inventors intend that only those claims which use the words "means for" are intended to be interpreted under 35 USC 112, sixth paragraph. Moreover, no limitations from the specification are intended to be read into any claims, unless those limitations are expressly included in the claims. The computers described herein may be any kind of computer, either general purpose, or some specific purpose computer such as a workstation. The programs may be written in C, Java, Brew, or any other programming language. The programs may be resident on a storage medium, e.g., magnetic or optical, e.g., the computer hard drive, a removable disk or media such as a memory stick or SD media, or other removable medium. The programs may also be run over a network, for example, with a server or other machine sending signals to the local machine, which allows the local machine to carry out the operations described herein.

[0056] Where a specific numerical value is mentioned herein, it should be considered that the value may be increased or decreased by 20%, while still staying within the teachings of the present application, unless some different range is specifically mentioned. Where a specified logical sense is used, the opposite logical sense is also intended to be encompassed.

[0057] The previous description of the disclosed example embodiments is provided to enable those persons of ordinary skill in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the inventions. Thus, the present inventions are not intended to be limited to the embodiments shown herein but are to be accorded the widest scope consistent with the principles and novel features disclosed herein.