

Title:
DETERMINING DEPTH DATA FOR A CAPTURED IMAGE
Document Type and Number:
WIPO Patent Application WO/2014/172508
Kind Code:
A1
Abstract:
A method, system, and one or more computer-readable storage media for depth acquisition from density modulated binary patterns are provided herein. The method includes capturing a number of images for a scene using an IR camera and a number of IR lasers including diffraction grates. Each image includes a density modulated binary pattern carrying phase information. The method also includes performing pixel based phase matching for the images to determine depth data for the scene based on the phase information carried by the density modulated binary patterns.

Inventors:
XIONG ZHIWEI (US)
WU FENG (US)
YANG ZHE (US)
Application Number:
PCT/US2014/034436
Publication Date:
October 23, 2014
Filing Date:
April 17, 2014
Assignee:
MICROSOFT CORP (US)
International Classes:
G01B11/25; G01B11/22; G06T7/00
Foreign References:
US20100118123A12010-05-13
US20130002860A12013-01-03
Other References:
YAJUN WANG AND SONG ZHANG: "Three-dimensional shape measurement with binary dithered patterns", APPLIED OPTICS, OPTICAL SOCIETY OF AMERICA, WASHINGTON, DC; US, vol. 51, no. 27, 20 September 2012 (2012-09-20), pages 6631 - 6636, XP001578474, ISSN: 0003-6935, [retrieved on 20120920], DOI: 10.1364/AO.51.006631
ZHANG S: "Flexible 3D shape measurement using projector defocusing: extended measurement range", OPTICS LETTERS, OPTICAL SOCIETY OF AMERICA, US, vol. 35, no. 7, 1 April 2010 (2010-04-01), pages 934 - 936, XP001552964, ISSN: 0146-9592
YUEYI ZHANG ET AL: "Hybrid structured light for scalable depth sensing", IMAGE PROCESSING (ICIP), 2012 19TH IEEE INTERNATIONAL CONFERENCE ON, IEEE, 30 September 2012 (2012-09-30), pages 17 - 20, XP032333106, ISBN: 978-1-4673-2534-9, DOI: 10.1109/ICIP.2012.6466784
Claims:
CLAIMS

1. A method for depth acquisition from density modulated binary patterns, comprising:

capturing a plurality of images for a scene using an IR camera and a plurality of IR lasers comprising diffraction grates, wherein each of the plurality of images comprises a density modulated binary pattern carrying phase information; and

performing pixel based phase matching for the plurality of images to determine depth data for the scene based on the phase information carried by the density modulated binary patterns.

2. The method of claim 1, wherein performing pixel based phase matching for the plurality of images comprises:

extracting the phase information from the density modulated binary pattern for each of the plurality of images; and

reconstructing the depth data for the scene based on a combination of the phase information for the density modulated binary patterns for all of the plurality of images.

3. The method of claim 1, comprising determining an absolute depth for the scene based on the depth data.

4. The method of claim 1, comprising generating a three-dimensional image of the scene based on the depth data.

5. The method of claim 1, wherein capturing the plurality of images comprises capturing three images using an IR camera and three IR lasers comprising diffraction grates.

6. The method of claim 1, comprising using a local uniqueness of each density modulated binary pattern to correct phase ambiguity within each of the plurality of images.

7. The method of claim 1, comprising determining depth data for a portion of the scene comprising a moving object.

8. The method of claim 1, comprising determining the phase information for the density modulated binary patterns based on a density of light spots in each of the plurality of images.

9. The method of claim 1, comprising using the depth data for a gaming application or a virtual reality application.

10. One or more computer-readable storage media for storing computer-readable instructions, the computer-readable instructions providing a system for depth acquisition from density modulated binary patterns when executed by one or more processing devices, the computer-readable instructions comprising code configured to: acquire images for a scene, wherein the images are recursively captured using an IR camera and a plurality of IR lasers comprising diffraction grates, and wherein each image comprises a density modulated binary pattern carrying phase information;

correct phase ambiguity within the images based on a local uniqueness of each density modulated binary pattern; and

perform pixel based phase matching for the images to reconstruct depth data for the scene based on the phase information carried by the density modulated binary patterns.

Description:
DETERMINING DEPTH DATA FOR A CAPTURED IMAGE

BACKGROUND

[0001] Systems for generating three-dimensional images of scenes rely on depth reconstruction techniques to determine the three-dimensional shapes of objects within the scenes. Some current systems utilize one-shot structured light based depth cameras to determine depth data for captured images. Such one-shot structured light systems use the depth cameras to emit patterns including random light spots, and then capture images including the emitted patterns. Depth data can then be determined by establishing the correspondences between a reference image and captured images including the patterns. However, depth data that is determined in this manner often suffers from holes and severe noise. In particular, the positions of the light spots are identified by blocks of pixels. Such blocks of pixels may be deformed when projected onto the boundary of objects within scenes. This may make it difficult to identify the correspondences between the images. Furthermore, the identified correspondences between the images may have limited accuracy in the case of abrupt depth change. As a result, the determined depth data may include random errors.

[0002] Phase shifting systems, which rely on the projection of a series of phase shifted sinusoidal patterns onto a scene, often provide higher quality depth data than the one-shot structured light systems described above. Such phase shifting systems can reconstruct depth data at every camera pixel with one set of captured images. Thus, the depth data may have higher spatial resolution. In addition, the depth data may be calculated from sinusoidal phase differences. As a result, noise may be suppressed, and the depth data may be more accurate. However, such systems rely on the use of projectors for the reconstruction of the depth data, as the sinusoidal patterns are grayscale.

SUMMARY

[0003] The following presents a simplified summary of the present embodiments in order to provide a basic understanding of some aspects described herein. This summary is not an extensive overview of the claimed subject matter. It is intended to neither identify critical elements of the claimed subject matter nor delineate the scope of the present embodiments. Its sole purpose is to present some concepts of the claimed subject matter in a simplified form as a prelude to the more detailed description that is presented later.

[0004] An embodiment provides a method for depth acquisition from density modulated binary patterns. The method includes capturing a number of images for a scene using an IR camera and a number of IR lasers including diffraction grates. Each image includes a density modulated binary pattern carrying phase information. The method also includes performing pixel based phase matching for the images to determine depth data for the scene based on the phase information carried by the density modulated binary patterns.

[0005] Another embodiment provides a system for depth acquisition from density modulated binary patterns. The system includes a number of IR lasers. Each IR laser is configured to emit a density modulated binary pattern carrying phase information onto a scene via a diffraction grate. The system also includes an IR camera configured to capture an image corresponding to the density modulated binary pattern emitted by each IR laser. The system further includes a processor and a system memory including code that, when executed by the processor, is configured to analyze the images to determine depth data for the scene based on the phase information carried by the density modulated binary patterns.

[0006] In addition, another embodiment provides one or more computer-readable storage media for storing computer-readable instructions. The computer-readable instructions provide a system for depth acquisition from density modulated binary patterns when executed by one or more processing devices. The computer-readable instructions include code configured to acquire images for a scene. The images are recursively captured using an IR camera and a number of IR lasers including diffraction grates. Each image includes a density modulated binary pattern carrying phase information. The computer-readable instructions also include code configured to correct phase ambiguity within the images based on a local uniqueness of each density modulated binary pattern. The computer-readable instructions further include code configured to perform pixel based phase matching for the images to reconstruct depth data for the scene based on the phase information carried by the density modulated binary patterns.

[0007] The following description and the annexed drawings set forth in detail certain illustrative aspects of the claimed subject matter. These aspects are indicative, however, of but a few of the various ways in which the principles of the present embodiments may be employed, and the claimed subject matter is intended to include all such aspects and their equivalents. Other advantages and novel features of the claimed subject matter will become apparent from the following detailed description of the present embodiments when considered in conjunction with the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] Fig. 1 is a block diagram of a computing system that may be used to determine depth data for captured images using density modulated binary patterns according to embodiments described herein;

[0009] Fig. 2 is a schematic of an imaging device that may be used to capture images according to embodiments described herein;

[0010] Fig. 3 is a schematic showing density modulated binary patterns that may be used according to embodiments described herein;

[0011] Fig. 4 is a graph showing timing circuits for controlling the emission of the density modulated binary patterns by the three IR lasers according to embodiments described herein;

[0012] Fig. 5A is a schematic showing a captured image with an emitted pattern;

[0013] Fig. 5B is a schematic showing an image including the reconstructed depth data for the captured image of Fig. 5A through the block matching technique;

[0014] Fig. 5C is a schematic showing an energy image generated from the captured image of Fig. 5A;

[0015] Fig. 5D is a schematic showing an image including depth data for the image of Fig. 5A that may be reconstructed from three energy images according to embodiments described herein;

[0016] Fig. 6 is a process flow diagram of a method for depth acquisition from density modulated binary patterns; and

[0017] Fig. 7 is a block diagram of a computer-readable storage medium that stores code adapted to determine depth data for captured images using density modulated binary patterns.

DETAILED DESCRIPTION

[0018] As discussed above, structured light systems, such as one-shot structured light systems and phase shifting systems, are often utilized for the reconstruction of depth data for captured images. Triangulation based one-shot structured light systems are similar to passive stereo systems, except one-shot structured light systems rely on the use of projectors for the reconstruction of depth data for captured images. Furthermore, many structured light systems rely on the generation of multiple patterns, and are unable to detect motion for a scene when the patterns are projected.

[0019] Phase shifting systems emit a series of phase shifted sinusoidal patterns. Increasing the number of stripes in the patterns can improve measurement accuracy. However, when sets of patterns are emitted by phase shifting systems, the scene is assumed to be static. This assumption is not valid in many practical scenarios. Furthermore, phase shifting systems also rely on the use of projectors for the reconstruction of depth data, as the sinusoidal patterns are grayscale.

[0020] Therefore, embodiments described herein provide for the determination of depth data for captured images using an infrared (IR) camera and a number of IR lasers including diffraction grates. Specifically, the IR lasers are used to emit density modulated binary patterns onto a scene, and depth data for the scene are reconstructed based on the density of the light spots in the density modulated binary patterns, which corresponds to particular phase information.

[0021] According to embodiments described herein, the density modulated binary patterns are designed to carry sufficient phase information without compromising the ability to reconstruct the depth data from a single captured image. However, because the carried phase information is not strictly sinusoidal, the depth data reconstructed from the phase information may contain a systematic error. Therefore, according to embodiments described herein, a pixel based phase matching technique is used to reduce the error in the depth data.

[0022] As a preliminary matter, some of the figures describe concepts in the context of one or more structural components, variously referred to as functionality, modules, features, elements, or the like. The various components shown in the figures can be implemented in any manner, such as via software, hardware (e.g., discrete logic components), firmware, or any combinations thereof. In some embodiments, the various components may reflect the use of corresponding components in an actual implementation. In other embodiments, any single component illustrated in the figures may be implemented by a number of actual components. The depiction of any two or more separate components in the figures may reflect different functions performed by a single actual component. Fig. 1, discussed below, provides details regarding one system that may be used to implement the functions shown in the figures.

[0023] Other figures describe the concepts in flowchart form. In this form, certain operations are described as constituting distinct blocks performed in a certain order. Such implementations are exemplary and non-limiting. Certain blocks described herein can be grouped together and performed in a single operation, certain blocks can be broken apart into plural component blocks, and certain blocks can be performed in an order that differs from that which is illustrated herein, including a parallel manner of performing the blocks. The blocks shown in the flowcharts can be implemented by software, hardware, firmware, manual processing, or the like. As used herein, hardware may include computer systems, discrete logic components, such as application specific integrated circuits (ASICs), or the like.

[0024] As to terminology, the phrases "configured to" and "adapted to" encompass any way that any kind of functionality can be constructed to perform an identified operation. The functionality can be configured (or adapted) to perform an operation using, for instance, software, hardware, firmware, or the like.

[0025] The term "logic" encompasses any functionality for performing a task. For instance, each operation illustrated in the flowcharts corresponds to logic for performing that operation. An operation can be performed using, for instance, software, hardware, firmware, or the like.

[0026] As used herein, the terms "component", "system" and the like are intended to refer to a computer-related entity, either hardware, software (e.g., in execution), or firmware, or any combination thereof. For example, a component can be a process running on a processor, an object, an executable, a program, a function, a library, a subroutine, a computer, or a combination of software and hardware.

[0027] By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process, and a component can be localized on one computer and/or distributed between two or more computers. The term "processor" is generally understood to refer to a hardware component, such as a processing unit of a computer system.

[0028] Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term "article of manufacture" as used herein is intended to encompass a computer program accessible from any computer-readable storage device or media.

[0029] Computer-readable storage media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, and magnetic strips, among others), optical disks (e.g., compact disk (CD) and digital versatile disk (DVD), among others), smart cards, and flash memory devices (e.g., card, stick, and key drive, among others). In contrast, computer-readable media (i.e., not storage media) generally may additionally include communication media such as transmission media for wireless signals and the like.

[0030] Fig. 1 is a block diagram of a computing system 100 that may be used to determine depth data for captured images using density modulated binary patterns according to embodiments described herein. The computing system 100 may include a processor 102, e.g., a central processing unit (CPU), that is adapted to execute stored instructions, as well as a memory device 104 that stores instructions that are executable by the processor 102. Such instructions may be used to implement a method for reconstructing depth data for captured images. The processor 102 can be a single core processor, multi-core processor, computing cluster, or any number of other configurations. The memory device 104 can include random access memory (RAM), read only memory (ROM), flash memory, or any other suitable memory systems.

[0031] The processor 102 may be connected through a bus 106 to a storage device 108 adapted to store a depth reconstruction module 110 and depth data 112 generated by the computing system 100. The storage device 108 can include a hard drive, an optical drive, a thumbdrive, an array of drives, or any combinations thereof. A network interface controller 114 may be adapted to connect the processor 102 through the bus 106 to a network 116. Through the network 116, electronic text and imaging input documents 118 may be downloaded and stored within the storage device 108. In addition, the computing system 100 may transfer depth data 112 over the network 116.

[0032] The processor 102 may be linked through the bus 106 to a display interface 120 adapted to connect the system 100 to a display device 122. The display device 122 can include a computer monitor, camera, television, projector, virtual reality display, three-dimensional (3D) display, or mobile device, among others. A human machine interface 124 within the computing system 100 may connect the processor 102 to a keyboard 126 and a pointing device 128. The pointing device 128 can include a mouse, trackball, touchpad, joystick, pointing stick, stylus, or touchscreen, among others.

[0033] The processor 102 may also be linked through the bus 106 to an input/output interface 130 adapted to connect the computing system 100 to any number of additional input/output devices. In particular, according to embodiments described herein, the input/output interface 130 may connect the computing system 100 to an IR camera 132 and a number of IR lasers 134. For example, the input/output interface 130 may connect the computing system 100 to three IR lasers 134. Each IR laser 134 may also include an associated diffraction grate that provides for the generation of a unique binary pattern.

[0034] In some embodiments, the IR camera 132 and IR lasers 134 may be included within a single imaging device 136. The imaging device 136 may be a 3D camera, gaming system, computer, or the like. In other embodiments, all or a portion of the IR lasers 134 may be externally connected to an imaging device including the IR camera 132. Further, in some embodiments, the computing system 100 itself may be an imaging device. In such embodiments, the IR camera 132 and the IR lasers 134 may reside within the computing system 100, rather than being externally connected to the computing system 100 via the input/output interface 130.

[0035] Further, the computing system 100 may include a graphics processing unit (GPU) 138. The GPU 138 may be linked through the bus 106 to the processor 102, the memory device 104, the storage device 108, the input/output interface 130, and any number of other components of the computing system 100. In various embodiments, the GPU 138 is adapted to execute instructions, such as the instructions stored in the memory device 104, either in conjunction with or independently of the processor 102. For example, the GPU 138 may execute all or a portion of the instructions that are used to implement the method for reconstructing depth data for captured images. For example, in some embodiments, the processor 102 and the GPU 138 may be used in parallel for the reconstruction of depth data for captured images. In such embodiments, the depth data may be reconstructed at a rate of around 20 frames per second (fps).

[0036] The block diagram of Fig. 1 is not intended to indicate that the computing system 100 is to include all the components shown in Fig. 1. Further, the computing system 100 may include any number of additional components not shown in Fig. 1, depending on the details of the specific implementation.

[0037] Fig. 2 is a schematic of an imaging device 200 that may be used to capture images according to embodiments described herein. In some embodiments, the imaging device 200 is a device that is externally connected to a computing system, such as the imaging device 136 described with respect to the computing system 100 of Fig. 1. In other embodiments, the imaging device 200 is included directly within a computing system, such as the computing system 100 of Fig. 1.

[0038] The imaging device 200 may include an IR camera 202 and a number of IR lasers 204. For example, in various embodiments, the imaging device 200 includes three IR lasers 204. Each IR laser 204 may include a diffraction grate 206. The diffraction grates 206 may allow each IR laser 204 to emit a unique density modulated binary pattern. Further, the imaging device 200 may include a synchronized circuit for controlling the functioning of the IR camera 202 and the IR lasers 204.

[0039] In the embodiment shown in Fig. 2, the IR camera 202 is vertically aligned with the central IR laser 204. However, it is to be understood that the imaging device 200 is not limited to the configuration shown in Fig. 2. For example, in some embodiments, the IR camera 202 may be located in line with and directly next to one of the outer IR lasers 204.

[0040] In various embodiments, the imaging device 200 may be used to capture images of a scene 208. In addition, the imaging device 200 may be used to determine depth data for one or more objects 210 within the scene 208, thus providing for the construction of a 3D image of the scene 208. Specifically, the IR lasers 204 may be used to recursively emit three unique density modulated binary patterns onto the scene 208, and the IR camera 202 may capture one image corresponding to each density modulated binary pattern. In other words, the first IR laser 204 may be activated, and the IR camera 202 may capture an image of the scene 208 as the first IR laser 204 emits its unique density modulated binary pattern onto the scene 208. The first IR laser 204 may then be deactivated, and this process may be repeated for the second and third IR lasers 204. The captured images may then be analyzed to determine depth data for the one or more objects 210 in the scene 208, as discussed further below.

[0041] The schematic of Fig. 2 is not intended to indicate that the imaging device 200 is to include all the components shown in Fig. 2. Further, the imaging device 200 may include any number of additional components not shown in Fig. 2, depending on the details of the specific implementation.

[0042] Fig. 3 is a schematic showing density modulated binary patterns 300A-C that may be used according to embodiments described herein. In various embodiments, the density of the light spots for each density modulated binary pattern 300A-C may be modulated such that each density modulated binary pattern 300A-C carries phase information. A pattern may be defined as P(r, c), where row r = 0, . . . , R − 1 and column c = 0, . . . , C − 1. With some existing systems, such as the Kinect™ system by Microsoft Corporation, the light spots are randomly and uniformly distributed in P(r, c). However, according to embodiments described herein, the numbers of light spots in different rows of the density modulated binary patterns 300A-C are determined according to a sinusoidal function, as shown below in Eq. (1).

k(r) = Round{[sin(2πr/T + θ) + 1] × a + 1}    (1)

In Eq. (1), Round(·) is a function for rounding a floating-point number to an integer. The term r is the row index. The term T is the number of rows in a sinusoidal period. The term a is a scaling factor to control k(r) as an integer from 1 to K. The three density modulated binary patterns 300A-C may be generated by setting θ to −2π/3, 0, or +2π/3, respectively.

[0043] In various embodiments, the pixels may be determined such that all the pixels in the same row have the same intensity. This may be accomplished using a pattern generation technique for which 1 × N pixels is defined as the basic unit for pattern generation. The term N is larger than the maximum k(r). The positions of the light spots are random within a basic unit, but are the same for all basic units in the same row. This ensures that every sliding window located in the same row has the same average intensity. In addition, since the number of light spots and their positions are different in different rows, the positions of light spots in every block are still unique. The pattern generation technique may then be defined according to the following pseudocode; an illustrative sketch in code is shown after it.

Require: The number of rows in one period is T, the scaling factor is a, and the initial phase is θ.
for r = 1, . . . , R do
    Calculate k(r) according to Eq. (1).
    Separate the row into M non-overlapping basic units.
    for m = 1, . . . , M do
        Randomly select k(r) positions from 1 to N. The pixels at the selected positions are light spots.
    end for
end for
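By way of illustration only, the pseudocode above may be realized as in the following Python sketch. The helper name generate_pattern, the image dimensions, and the random seed are illustrative assumptions rather than part of the present embodiments; the per-row reuse of spot positions follows the description in paragraph [0043].

import numpy as np

def generate_pattern(R, C, T, a, theta, N=8, seed=0):
    """Generate one density modulated binary pattern P(r, c).

    Each 1 x N basic unit in row r contains k(r) light spots, with
    k(r) given by Eq. (1). Spot positions are random within a basic
    unit but identical for all basic units in the same row.
    """
    rng = np.random.default_rng(seed)
    P = np.zeros((R, C), dtype=np.uint8)
    M = C // N  # number of non-overlapping basic units per row
    for r in range(R):
        # Eq. (1): k(r) = Round{[sin(2*pi*r/T + theta) + 1] * a + 1}
        k = int(round((np.sin(2 * np.pi * r / T + theta) + 1) * a + 1))
        cols = rng.choice(N, size=k, replace=False)  # same for all units in row r
        for m in range(M):
            P[r, m * N + cols] = 1
    return P

# Three phase shifted patterns: theta = -2*pi/3, 0, +2*pi/3, with
# T = 32 and a = 2 so that k(r) ranges over {1, ..., 5}.
patterns = [generate_pattern(480, 640, T=32, a=2, theta=th)
            for th in (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)]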

In various embodiments, the scaling factor a may be determined based on the term N and the range of k(r), where N is an empirical parameter. The larger the value of N is, the greater the number of different values k(r) can have. However, if the value of N is too large, the density k(r)/N will be too small to establish the correspondences when k(r) is a small integer. Thus, in some embodiments, N may be set to a value of 8. When the value of N is decided, the maximum value of k(r) can be selected to be as large as 5 in order to have more densities in a period. Several different approaches may be used to generate patterns with N = 8 and k(r) ∈ {1, . . . , 5}. For example, a pattern in which every other row contains the number of light spots calculated by Eq. (1) and the other rows are black may be used.

[0044] The smaller the value of the period T is, the more accurate the depth measurement will be. However, according to embodiments described herein, T may also have a minimum acceptable value. For N = 8 and k(r) ∈ {1, . . . , 5}, if T is smaller than 32, not every density k(r) appears, due to a rounding error. In other words, the range of k(r) is not fully used. Therefore, in some embodiments, the period T may be set to 32.

[0045] In some cases, it may be desirable to evaluate whether the densities of the generated density modulated binary patterns 300A-C can approximate the sinusoidal fringe well. The approximation is evaluated using the average energy in an N × N sliding window. For simplicity, the case for which the signal k(r) is continuous may be considered, as shown below in Eq. (2).

E(r) = (1/N²) ∫_{r−N/2}^{r+N/2} k(t) dt    (2)

If Eq. (1) is substituted into Eq. (2), and the rounding and the constant term are ignored, Eq. (2) may be rewritten as shown below in Eq. (3).

E = β sin(πN/T) sin(2πr/T + θ)    (3)

In Eq. (3), the terms β and sin(πN/T) are constant.

[0046] Therefore, the energy E in the proposed density modulated binary patterns 300A-C is mathematically a sinusoidal function. However, since the proposed density modulated binary patterns 300A-C are binary, k(r) in Eq. (1) has to be rounded to an integer. When approximating the sinusoidal fringe, such rounding may result in obvious stair-wise errors in the approximation. If the errors are not handled carefully during reconstruction of the depth data, a systematic error will be introduced into the reconstructed depth data.
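To see the stair-wise behavior concretely, the discrete version of Eq. (2) can be computed as an N × N box average over a generated pattern and its row profile compared with the sinusoid of Eq. (3). The following sketch assumes the hypothetical generate_pattern helper shown earlier.

import numpy as np
from scipy.ndimage import uniform_filter

N, T = 8, 32
P = generate_pattern(480, 640, T=T, a=2, theta=0.0).astype(float)

# Discrete version of Eq. (2): average energy in an N x N sliding window.
E = uniform_filter(P, size=N)

# Per Eq. (3), the row-wise mean of E approximates a sinusoid with
# period T rows; the rounding in Eq. (1) leaves stair-wise deviations.
row_profile = E.mean(axis=1)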

[0047] Fig. 4 is a graph 400 showing timing circuits 402A-C for controlling the emission of the density modulated binary patterns by the three IR lasers according to embodiments described herein. An x-axis 404 of the graph 400 may represent time. A y-axis 406 of the graph may represent the timing circuits 402A-C for controlling the emission of the density modulated binary patterns by the three IR lasers, as well as an image capture timing circuit 408 for the IR camera.

[0048] As shown in Fig. 4, each IR laser is driven by a different one of the timing circuits 402A-C for emitting its respective density modulated binary pattern. Therefore, the density modulated binary patterns may be recursively emitted by the three infrared lasers. An IR camera may be used to capture each density modulated binary pattern as it is emitted by one of the IR lasers, as shown by the image capture timing circuit 408 for the IR camera.
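As a loose illustration of this timing scheme (the application describes dedicated timing circuits, not software control), the recursive emit-and-capture sequence might be driven as follows; the camera and laser objects and their on(), off(), and capture() methods are hypothetical stand-ins for device drivers.

import time

def capture_sequence(camera, lasers, exposure_s=0.01):
    # Emit each laser's unique pattern in turn and capture one image
    # per pattern, mirroring timing circuits 402A-C and 408 of Fig. 4.
    images = []
    for laser in lasers:
        laser.on()               # start emitting this laser's pattern
        time.sleep(exposure_s)   # allow the exposure window to elapse
        images.append(camera.capture())
        laser.off()              # stop before the next laser starts
    return images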

[0049] Fig. 5A is a schematic showing a captured image 500 with an emitted pattern. Since the positions of the light spots in every small block are still unique, a block matching technique may be used to determine the disparity of every pixel between the reference image F(r, c) and the captured image I(r, c). The reference image F(r, c) is the image captured on a vertical plane at a known distance. Zero-normalized cross-correlation (ZNCC) over a block may be used as the measurement. The disparity at (r, c) may be calculated as shown below in Eq. (4).

D_1(r, c) = argmax_{(r', c')} { Σ_{i,j} A(r, c, i, j) B(r', c', i, j) / √[ Σ_{i,j} A(r, c, i, j)² · Σ_{i,j} B(r', c', i, j)² ] }    (4)

In Eq. (4), A(r, c, i, j) = F(r + i, c + j) − F_0 and B(r', c', i, j) = I(r' + i, c' + j) − I_0. The terms F_0 and I_0 are the average intensities of F and I, respectively. The term D_1(r, c) is the row and column disparity with the maximum ZNCC. Every pixel belongs to multiple overlapping blocks for calculating ZNCC and, thus, has multiple disparities calculated by Eq. (4). The disparity of the pixel may be decided by the block with the largest ZNCC.
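A minimal sketch of the ZNCC measure of Eq. (4) and a brute-force disparity search follows; the block half-size and search range are illustrative assumptions, and only the row direction is searched here, consistent with the vertically aligned camera and laser of Fig. 2.

import numpy as np

def zncc(ref_block, cap_block, eps=1e-9):
    # Zero-normalized cross-correlation, as in Eq. (4). Block averages
    # stand in here for the F_0 and I_0 normalization terms.
    A = ref_block - ref_block.mean()
    B = cap_block - cap_block.mean()
    return float((A * B).sum() / (np.sqrt((A * A).sum() * (B * B).sum()) + eps))

def best_row_disparity(F, I, r, c, half=8, search=64):
    # Exhaustive search for the row disparity maximizing ZNCC at (r, c);
    # assumes (r, c) lies far enough from the image borders.
    ref = F[r - half:r + half + 1, c - half:c + half + 1]
    best_score, best_dr = -2.0, 0
    for dr in range(-search, search + 1):
        r0, r1 = r + dr - half, r + dr + half + 1
        if r0 < 0 or r1 > I.shape[0]:
            continue
        score = zncc(ref, I[r0:r1, c - half:c + half + 1])
        if score > best_score:
            best_score, best_dr = score, dr
    return best_dr, best_score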

[0050] Fig. 5B is a schematic showing an image 502 including the reconstructed depth data for the captured image 500 of Fig. 5A, reconstructed through the block matching technique described above. The discontinuity in the smooth surface of the image 502 can be clearly observed from Fig. 5B. For every captured image, an energy image may be calculated using the discrete version of Eq. (2).

[0051] Fig. 5C is a schematic showing an energy image 504 generated from the captured image 500 of Fig. 5A. The energy images of the reference image and the captured images may be defined as Ê_i(r, c) and E_i(r, c), respectively, wherein the subscript i indicates the different values of θ. The three energy images of a captured set may have the relationship shown below in Eq. (5).

E_{i−1}(r, c) = E'(r, c) + E''(r, c) sin[φ(r, c) − 2π/3],
E_i(r, c) = E'(r, c) + E''(r, c) sin[φ(r, c)],    (5)
E_{i+1}(r, c) = E'(r, c) + E''(r, c) sin[φ(r, c) + 2π/3]

In Eq. (5), the term E'(r, c) is the background intensity; the term E''(r, c) is the sinusoidal amplitude; and the term φ(r, c) is the phase image to be solved. In addition, the term E'(r, c) corresponds to the average intensity in a small area. The phase φ(r, c) can be obtained by solving Eq. (6).

φ(r, c) = arctan{ √3 [E_{i−1}(r, c) − E_{i+1}(r, c)] / [2E_i(r, c) − E_{i−1}(r, c) − E_{i+1}(r, c)] }    (6)
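Eq. (6) can be evaluated per pixel with a quadrant-aware arctangent; in the following sketch, np.arctan2 stands in for the arctan of Eq. (6) so that the full (−π, π] phase range is recovered.

import numpy as np

def phase_from_energy(E_prev, E_cur, E_next):
    # E_prev, E_cur, E_next: energy images for theta = -2*pi/3, 0,
    # and +2*pi/3 in Eq. (5). Returns the wrapped phase phi(r, c).
    num = np.sqrt(3.0) * (E_prev - E_next)
    den = 2.0 * E_cur - E_prev - E_next
    return np.arctan2(num, den)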

[0052] For typical phase shifting systems, the depth data can be directly calculated from φ(r, c). However, since the pattern used according to embodiments described herein includes a stair-wise error for approximating the sinusoidal phase, the depth data may not be calculated directly from φ(r, c). Instead, a pixel based phase matching technique may be used to calculate the depth data. First, the reference phase φ̂(r, c) may be calculated from the reference energy images Ê_i(r, c). For every phase φ(r, c), the most closely matched φ̂(r − Δr, c) within one period may be determined. The disparity d_2(r, c) in one period may then be calculated according to Eq. (7).

d_2(r, c) = Δr + [φ̂(r − Δr, c) − φ(r, c)] / [φ̂(r − Δr, c) − φ̂(r − Δr − 1, c)],  if φ(r, c) < φ̂(r − Δr, c)

d_2(r, c) = Δr − [φ(r, c) − φ̂(r − Δr, c)] / [φ̂(r − Δr + 1, c) − φ̂(r − Δr, c)],  if φ(r, c) > φ̂(r − Δr, c)    (7)

In Eq. (7), Δr is the integer row disparity. The other term in Eq. (7) is the fractional row disparity obtained by linear interpolation of the reference phase between neighboring rows.

[0053] The ambiguity problem may also be solved. If there are M periods in the captured images, the period with the disparity d_2(r, c) may be identified. Identifying the period with the disparity d_2(r, c) may be relatively easy, since the positions of the light spots are different for every period. The ZNCC values for m = 1, . . . , M can be determined, and the period m with the largest ZNCC may be selected. Finally, the disparity can be reconstructed according to Eq. (8).

D_2(r, c) = d_2(r, c) + m × L    (8)

In Eq. (8), the term L is the number of rows in one period in the captured images.
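As a rough sketch of this per-pixel matching (the sub-row interpolation below is one plausible reading of Eq. (7), not a verified transcription), the disparity within one period might be estimated as follows.

import numpy as np

def phase_disparity(phi, phi_ref, r, c, L):
    # Find the reference row within one period (L rows) above r whose
    # phase best matches phi[r, c], then refine by linear interpolation.
    target = phi[r, c]
    rows = np.arange(max(r - L + 1, 1), r + 1)
    idx = int(np.argmin(np.abs(phi_ref[rows, c] - target)))
    matched = int(rows[idx])
    d_int = r - matched  # integer row disparity, Delta-r in Eq. (7)
    # Fractional part: residual phase difference divided by the local
    # per-row phase change of the reference phase.
    slope = phi_ref[matched, c] - phi_ref[matched - 1, c]
    frac = (phi_ref[matched, c] - target) / slope if slope != 0 else 0.0
    return d_int + frac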

[0054] Fig. 5D is a schematic showing an image 506 including depth data for the image 500 of Fig. 5A that may be reconstructed from three energy images according to embodiments described herein. Specifically, the depth data may be reconstructed using the pixel based phase matching technique described above.

[0055] Although using the embedded phase to reconstruct the depth data for objects in a scene may produce high quality results, the technique utilizes at least three captured images and cannot handle moving objects in the scene. By contrast, the block matching technique for reconstructing depth data only utilizes one captured image and is able to handle moving objects in the scene. Therefore, according to embodiments described herein, the block matching technique may be used to determine depth data for a scene with moving objects.

[0056] In various embodiments, the depth data D_1(r, c) from the block matching technique and D_2(r, c) from the pixel based phase matching technique may be integrated according to the motion detected in the scene. Changes in intensity may be used to detect whether there is any motion in the scene. The current captured image may be compared with the image captured three frames earlier, because their patterns are the same. If the average intensity change in a region is larger than a threshold, the region may be marked as moving. Depth data D_1(r, c) may then be adopted in the moving regions, while depth data D_2(r, c) may be adopted in the stationary regions.
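A minimal sketch of this integration follows; the region size and intensity-change threshold are assumed values, as the application does not specify them.

import numpy as np
from scipy.ndimage import uniform_filter

def fuse_depth(D1, D2, I_cur, I_prev3, threshold=10.0, region=16):
    # D1: block matching depth (robust to motion); D2: phase-based
    # depth (more accurate). I_prev3 is the image captured three
    # frames earlier, whose emitted pattern matches the current one.
    change = uniform_filter(np.abs(I_cur.astype(float) - I_prev3.astype(float)),
                            size=region)
    moving = change > threshold  # motion mask from region-averaged change
    return np.where(moving, D1, D2)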

[0057] Fig. 6 is a process flow diagram of a method 600 for depth acquisition from density modulated binary patterns. The method 600 may be implemented by the computing system 100 and/or the imaging device 200 discussed with respect to Figs. 1 and 2, respectively. The method 600 begins at block 602, at which a number of images for a scene are captured using an IR camera and a number of IR lasers including diffraction grates. Each image includes a density modulated binary pattern carrying phase information.

[0058] In various embodiments, three images are captured using the IR camera and three IR lasers including diffraction grates. However, it is to be understood that any suitable number of images may be captured using any suitable number of IR lasers according to embodiments described herein.

[0059] At block 604, pixel based phase matching is performed for the images to determine depth data for the scene based on the phase information carried by the binary patterns. This may be accomplished by extracting the phase information from the binary patterns and reconstructing the depth data for the scene based on a combination of the phase information for the binary patterns. In addition, in various embodiments, phase ambiguity within the images is corrected prior to performing pixel based phase matching for the images. The phase ambiguity may be corrected based on a local uniqueness of each binary pattern.

[0060] In some embodiments, the scene includes one or more moving objects. In such embodiments, block matching is performed to determine depth data for the portion(s) of the scene including the one or more moving objects.

[0061] In various embodiments, the depth data is used to determine the absolute depth for the scene. In addition, the depth data may be used to generate a three-dimensional image (or video) of the scene.

[0062] The process flow diagram of Fig. 6 is not intended to indicate that the blocks of the method 600 are to be executed in any particular order, or that all of the blocks are to be included in every case. Further, any number of additional blocks not shown in Fig. 6 may be included within the method 600, depending on the details of the specific implementation.

[0063] The method 600 may be used for a variety of applications. In some embodiments, the method 600 may be used to provide 3D images (or videos) for gaming applications. For example, the method 600 may be implemented within the Kinect™ system by Microsoft Corporation. In addition, in some embodiments, the method 600 may be used to provide 3D images (or videos) for telepresence applications or other virtual reality applications.

[0064] Fig. 7 is a block diagram of a computer-readable storage medium 700 that stores code adapted to determine depth data for captured images using density modulated binary patterns. The computer-readable storage medium 700 may be accessed by a processor 702 over a computer bus 704. Furthermore, the computer-readable storage medium 700 may include code configured to direct the processor 702 to perform the steps of the current method.

[0065] The various software components discussed herein may be stored on the computer-readable storage medium 700 as indicated in Fig. 7. For example, an image capture module 706 may be adapted to capture images of a scene using an IR camera and a number of IR lasers including diffraction grates. In addition, a depth reconstruction module 708 may be adapted to reconstruct depth data for the captured images based on binary patterns within the images.

[0066] The block diagram of Fig. 7 is not intended to indicate that the computer-readable storage medium 700 is to include all the components shown in Fig. 7. Further, the computer-readable storage medium 700 may include any number of additional components not shown in Fig. 7, depending on the details of the specific implementation.

[0067] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.