

Title:
INDEPENDENT THREAD VIDEO DISPARITY ESTIMATION METHOD AND CODEC
Document Type and Number:
WIPO Patent Application WO/2013/173106
Kind Code:
A1
Abstract:
A method for real-time disparity estimation of stereo video data receives a sequence of frames of stereo video data. Image-based disparity estimation is initially conducted to produce initial disparity estimates, and the disparity estimates are refined in a space-time volume. The algorithm produces disparity via a multi-thread process in which an output is independent of the input for each step of the process.

Inventors:
NGUYEN TRUONG (US)
CHAN HO (US)
JUANG JASON (US)
Application Number:
PCT/US2013/039717
Publication Date:
November 21, 2013
Filing Date:
May 06, 2013
Assignee:
NGUYEN TRUONG (US)
CHAN HO (US)
JUANG JASON (US)
UNIV CALIFORNIA (US)
International Classes:
H04N13/00; H04N7/26
Foreign References:
US8179448B22012-05-15
US7324687B22008-01-29
US20110032341A12011-02-10
US20110176722A12011-07-21
US20120007954A12012-01-12
Attorney, Agent or Firm:
FALLON, Steven P. (Burns & Crain Ltd.300 South Wacker Drive,Suite 250, Chicago Illinois, US)
Claims:
CLAIMS

1. A method for disparity estimation of stereo video data, comprising:

receiving a sequence of frames of stereo video data;

initially conducting image-based disparity estimation to produce initial disparity estimates, wherein said initial disparity estimation produces disparity estimates via a thread process in which an output is independent of the input for each step of the process;

refining disparity estimates in a space-time volume.

2. The method of claim 1, wherein said refining comprises minimizing disparity in the space-time volume.

3. The method of claim 2, wherein the initial disparity estimation further comprises smoothing the initial disparity estimates.

4. The method of claim 2, wherein said minimizing comprises the minimization of two terms:

minimize_f  λ||f − g||_1 + ||Df||_2

where ||f − g||_1 is a measurement of the difference between the optimization variable f and the observed disparity g.

6. The method of claim 2, wherein said minimizing comprises minimizing a variable:

minimize_f  λ||f − g||_1 + ||Df||_2

where ||Df||_2 is defined as:

||Df||_2 = Σ √( β_x²|D_x f|² + β_y²|D_y f|² + β_t²|D_t f|² )

and D_x, D_y, and D_t are forward difference operators and (β_x, β_y, β_t) are scaling parameters.

7. The method of claim 6, wherein said minimizing comprises solving an equivalent constrained minimization problem.

8. The method of claim 1, wherein the initial disparity estimation is conducted via census transformation and occlusion handling and filling that produces a result independently for each pixel.

9. The method of claim 8, wherein said refining enforces sparsity of a spatial and temporal gradient in the space-time volume.

10. The method of claim 1, wherein the initial disparity estimation is conducted via a combination of pixel ordering and color intensity to determine a correspondence map that minimizes error for a distance function.

11. The method of claim 1, wherein said initially conducting comprises converting pixels into gray-scale and then constructing an m x m sliding window over each pixel, then for each m x m block, conducting a census transform that maps intensity values to a bit vector by performing Boolean comparisons between a center pixel intensity and its neighborhood pixels, wherein if the surrounding pixel has a lower value than the center pixel, the bit is set to 0; otherwise the bit is set to 1.

12. The method of claim 11, wherein a cost between two blocks is determined and then a sampling-insensitive absolute difference is used to compute the color cost between pixels by considering the subpixels and then generating an initial disparity estimate as a three-dimensional error map based upon a range of disparity at each pixel.

13. The method of claim 12, further comprising conducting cross-based aggregation on the initial disparity estimates to enhance spatial smoothness.

14. The method of claim wherein said initial disparity estimation and refining are conducted using multiple threads in a GPU such that blocks to be processed are distributed randomly to idle processor cores.

15. The method of claim 12, wherein each block being processed utilizes shared memory and separate registers for each thread being processed.

16. The method of claim 1, implemented by computer code stored on a non-transient medium.

17. The method of claim 13, implemented by a video codec and GPU.

18. A method for disparity estimation of stereo video data, comprising:

receiving a pair of video sequences I_L(x) and I_R(x), where for images in the video sequences, conducting a census transform to determine a cost

Cost_CT(I_L(x), I_R(x));

defining a distance function as:

dist(I_L(x), I_R(x)) = ½ (1 − exp(−Cost_CT(I_L(x), I_R(x))/λ_CT)) + ½ (1 − exp(−Cost_BT(I_L(x), I_R(x))/λ_BT))

and at each pixel location (x,y), a range of disparity d ∈ {d_min, ..., d_max} is tested and a three-dimensional error map

ε(x; d) = dist(I_L(x + d(x)), I_R(x))

is generated;

conducting cross-based aggregation on the three-dimensional error map and setting the disparity that minimizes cost at each pixel location x_p;

conducting occlusion handling and filling on the error map using a left-right consistency check to detect unreliable pixels; and

refining disparity maps using the following minimization problem:

minimize_f  λ||f − g||_1 + ||Df||_2

where g = vec(g(x, y, t)) and f = vec(f(x, y, t)) are the initial disparity and the optimization variable, respectively, and the term ||Df||_2 denotes the total variation norm, thereby refining a group of disparity estimates in a space-time volume.

Description:
INDEPENDENT THREAD VIDEO DISPARITY

ESTIMATION METHOD AND CODEC

STATEMENT OF GOVERNMENT INTEREST

This invention was made with government support under grant no. CCF- 1065305 awarded by the National Science Foundation. The government has certain rights in the invention.

FIELD

A field of the invention is video encoding and decoding. Example applications of the invention include the encoding, storage, transmission, and decoding of stereo video data, including 3D video processing. Methods and codecs of the invention are particularly applicable to view synthesis for multi-view coding that uses disparity estimation, 3D video conferencing, real-time 3D video coding, real-time object detection for surveillance, real-time video tracking using stereo data, and real-time stereo-to-multiview rendering.

BACKGROUND

Disparity estimation is a necessary component in stereo video processing. Video disparity is used for 3D video processing. In a two-camera imaging system, disparity is defined as the vector difference between the imaged object point in each image relative to the focal point. It is this disparity that allows for depth estimation of objects in the scene via triangulation of the point in each image. In rectified stereo, where both camera images are in the same plane, only horizontal disparity exists. In this case, multiview geometry shows that disparity is inversely proportional to actual depth in the scene.

However, existing disparity estimation methods are largely limited to stereo images and have been shown effective for, or tailored to, specific datasets such as the Middlebury Stereo Vision Database. These methods are typically slow, and usually can only be applied to high quality images with high contrast, rich color and low noise. These methods typically require off-line processing and cannot be implemented in real time. Therefore, simply extending existing image-based methods to estimate video disparities is insufficient because video disparity requires not only spatial quality but also temporal consistency. Additionally, real-time processing is required in many practical video systems.

Estimating disparity has been extensively studied for images. The existing image-based methods are ill-suited to video disparity estimation on a frame-by-frame basis because temporal consistency is not guaranteed. Using these methods for video disparity estimation often leads to poor spatial and temporal consistency. Temporal consistency is the smoothness of the disparity in time. If a video disparity is not temporally consistent, an observer will see flickering artifacts. Temporally inconsistent disparity degrades the performance of view synthesis and 3D video coding.

Existing disparity estimation methods are also tuned for specific datasets such as the Middlebury stereo database. See, D. Scharstein and R. Szeliski, "A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms," International Journal of Computer Vision, vol. 47, pp. 7-42 (April 2002). Such methods tend to perform poorly when applied to real video sequences. Many common real video sequences have lighting conditions, color distributions and object shapes that can be very different from the images in the Middlebury stereo database. For methods that require training, applying such methods to real videos is almost impossible, and at the least is highly impractical from the perspective of execution speed and computational complexity.

Existing image-based disparity estimation techniques may be categorized into one of two groups: local or global methods. Local methods treat each pixel (or an aggregated region of pixels) in the reference image independently and seek to infer the optimal horizontal displacement to match it with the corresponding pixel/region. Global methods incorporate assumptions about depth discontinuities and estimate disparity values by minimizing an energy function over all pixels using techniques such as Graph Cuts or Hierarchical Belief Propagation. Y. Boykov et al., "Fast Approximate Energy Minimization via Graph Cuts," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 11, pp. 1222-1239 (November 2001); V. Kolmogorov and R. Zabih, "Computing Visual Correspondence with Occlusions via Graph Cuts," International Conference on Computer Vision Proceedings, pp. 508-515 (2001). Local methods tend to be very fast, but global methods tend to be more accurate. Most implementations of global methods tend to be unacceptably slow. See, D. Scharstein and R. Szeliski, "A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms," International Journal of Computer Vision, vol. 47, pp. 7-42 (April 2002).

Attempts to solve stereo-matching problems for video have had limited success. Difficulties encountered have included the computational bottleneck of dealing with multidimensional data, the lack of any real datasets with ground truth, and the unclear relationship between optimal spatial and temporal processing for correspondence matching. Most have attempted to extend existing image-based methods to video and have produced computational burdens that are impractical for most applications.

One attempt to extend the Hierarchical Belief Propagation method to video extends the matching cost representation to video by a 3-dimensional Markov Random Field (MRF). O. Williams, M. Isard, and J. MacCormick, "Estimating Disparity and Occlusions in Stereo Video Sequences," in Computer Vision and Pattern Recognition Proceedings (2005). Reported algorithmic run times were as high as 947.5 seconds for a single 320 x 240 frame on a powerful computer, which is highly impractical.

Other approaches have used motion flow fields to attempt to enforce temporal coherence. One motion flow field technique makes use of a motion vector field. F. Huguet and F. Devernay, "A Variational Method for Scene Flow Estimation from Stereo Sequences," in International Conference on Computer Vision Proceedings, pp. 1-7 (2007). Another enforces temporal consistency on disparity maps from uncalibrated stereo videos. See, M. Bleyer and M. Gelautz, "Temporally Consistent Disparity Maps from Uncalibrated Stereo Videos," in Proceedings of the 6th International Symposium on Image and Signal Processing (2009).

One computationally practical method is a graphics processing unit (GPU) implementation of Hierarchical Belief Propagation that relies upon locally adaptive support weights. See, C. Richardt et al., "Realtime Spatiotemporal Stereo Matching Using the Dual-Cross-Bilateral Grid," in European Conference on Computer Vision Proceedings (2010); K. J. Yoon and I. S. Kweon, "Locally Adaptive Support-Weight Approach for Visual Correspondence Search," in Computer Vision and Pattern Recognition Proceedings (2005). This method integrates temporal coherence in a similar way to Williams et al. (O. Williams, M. Isard, and J. MacCormick, "Estimating Disparity and Occlusions in Stereo Video Sequences," in Computer Vision and Pattern Recognition Proceedings (2005)) and also provides a synthetic dataset with ground-truth disparity maps. Other methods that are practical require specific hardware or place data constraints. See, J. Zhu et al., "Fusion of Time-of-Flight Depth and Stereo for High Accuracy Depth Maps," in Computer Vision and Pattern Recognition Proceedings (2008), pp. 1-8; G. Zhang, J. Jia, T. T. Wong, and H. Bao, "Consistent Depth Maps Recovery from a Video Sequence," PAMI, vol. 31, no. 6, pp. 974-988 (2009).

SUMMARY OF THE INVENTION

An embodiment of the invention is a method for disparity estimation of stereo video data. Image-based disparity estimation is initially conducted to produce initial disparity estimates, and the initial disparity estimation produces disparity estimates via a thread process in which each output is generated independently. Disparity estimates are refined in a space-time volume.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of a preferred method and device of the invention for independent thread video disparity estimation;

FIG. 2 illustrates an example cross-based aggregation time disparity estimation that is part of an initial disparity estimation in a preferred embodiment of the invention;

FIG. 3 illustrates an example occlusion handling that is part of an initial disparity estimation in a preferred embodiment of the invention; and

FIG. 4 shows a preferred processing architecture in GPU to achieve real-time disparity estimation using Compute Unified Device Architecture (CUDA); and

FIGs. 5A-5C are a flowchart illustrating preferred steps for handling kernels in a GPU executing a method of the invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

An embodiment of the invention is a method that can provide disparity estimation using independent threads. Left and right stereo views are received from a stereo sensor system or other stereo video sources. The method conducts an initial disparity estimation followed by a refinement of the initial disparity estimation. In the initial estimation, a shape adaptive aggregation is applied to compute the initial image-based disparity map, which provides high matching accuracy. The preferred initial estimation is an image-based disparity estimation that combines census transform and color, cross-based aggregation, and occlusion handling and filling. Other known methods can also be used for the initial disparity estimation, and the space-time minimization of the invention can be used with any of those other methods. However, experiments conducted show that the shape adaptive aggregation method provides the best tradeoff between quality and speed. Refinement is conducted with a space-time minimization that enforces spatio-temporal consistency between frames. The initial disparity estimation and refinement are constructed to enable parallel processing, and permit independent thread processing. In each step of the preferred method, the output is independent of the input, which therefore allows multithread processing. For example, when calculating the census transform for pixel (x,y), the resulting bit string is not dependent on the resulting bit string for any other pixel (x',y'). This enables preferred embodiments to operate in real time with appropriate hardware and thread processing.

Temporal consistency (and spatial consistency) is improved by solving a space-time minimization using a spatio-temporal total variation (TV) regularization. Treating the initial video disparity as a space-time volume, the space-time TV enforces sparsity of the spatial and temporal gradient in the space-time volume. The minimization problem is solved using an augmented Lagrangian method. Experimental results verify that the method performs very well in comparison to existing methods known to the inventors. Methods of the invention produce spatially and temporally consistent results in real time, whereas prior real-time methods provide only spatial consistency and can suffer from temporal inconsistencies.

A preferred embodiment codec of the invention can be implemented with standard graphics processors and achieve real-time performance. An example system to demonstrate the method of the invention was implemented with dual nVidia graphics cards (GTX 580). A preferred method of the invention has been demonstrated to be robust to various types of inputs that are non-ideal. Many prior methods are effective only with idealized video sets, and many are only suitable for image processing. The present method is robust for inputs such as compressed movie trailers, stereo footage from a DaVinci surgery system, and stereo video captured from everyday life.

Preferred embodiments of the invention will now be discussed with respect to the drawings. The drawings may include schematic representations, which will be understood by artisans in view of the general knowledge in the art and the description that follows. Features may be exaggerated in the drawings for emphasis, and features may not be to scale.

A preferred method of the invention will be discussed with respect to FIG. 1. The method can be implemented, for example, via computer code stored on a non-transient medium. It can be implemented in hardware or firmware, and as a codec, in various video capture and processing devices. Example devices include augmented reality systems, human computer interaction devices, and gaming or entertaining devices such as Microsoft Kinect. Additional particular example devices include HTC EVO 3D and LG 3DTV, which can use the disparity estimation of the invention to create different depth sensation on the stereo displays of these devices. Another example device is the Alioscopy multiview display, which could use the disparity estimation of the invention to synthesize multi-virtual views.

In the method of FIG. 1, a stereo video (left sequence 10 and right sequence 12) is input to an image-based disparity estimation algorithm 14. The initial estimation is done frame by frame, but using the right and left frames together. This estimation algorithm preferably includes minimization via a method such as census transformation and/or color intensity methods, cross-based aggregation, and occlusion handling and filling. The occlusion handling and filling is unique in that pixels are made independent so that processing can proceed in parallel. The occlusion handling and filling is not as accurate as some methods that have input-output dependence, but the subsequent spatio-temporal refinement 16 relieves and compensates for the lesser accuracy. The spatio-temporal refinement enforces sparsity of the spatial and temporal gradient in a space-time volume, and the minimization can be expediently solved with an augmented Lagrangian method.

Initial Disparity Estimation 14

For a pair of video sequences I_L(x) and I_R(x), where x = (x, y) denotes a pixel location, the goal of disparity estimation is to determine the correspondence map d(x) such that the error

ε = dist(I_L(x + d(x)), I_R(x))

is minimized for some distance function dist(·,·). The distance function can be defined globally (i.e., over the entire image) or locally (i.e., partitioning the image into blocks so that dist(·,·) varies for different blocks). For each class, it can further be divided into pixel ordering methods (e.g., using the census transform) and color intensity based methods (e.g., using the sum of absolute differences). The distance function used here is preferably a combination of census transform and color intensity methods.

Census Transform and Color Intensity Method

The census transform (CT) is a non-parametric local transform. It relies on the relative ordering of local intensity values, instead of the color intensity values themselves. See, e.g., R. Zabih and J. Woodfill, "Non-Parametric Local Transforms for Computing Visual Correspondence," European Conference on Computer Vision 1994, pp. 151-158 (1994). For a given image, the color pixels are first converted into gray-scale, and an m x m window around each pixel is census transformed. Given two census transformed blocks (each of size m x m), the Hamming distance, which is the logical XOR operation between the two blocks followed by the sum of non-zero entries, can be used to determine the cost between them.

The cost can be written as:

Cost_CT(I_L(x), I_R(x)) = hamming(census(I_L(x)), census(I_R(x)))
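
For illustration, the following is a minimal NumPy sketch of these two steps. The window size m = 3 and the helper names census and hamming_cost are illustrative assumptions, not a reference implementation of the patent:

import numpy as np

def census(gray, m=3):
    # Map each m x m window to a bit vector: bit = 1 if the neighbor is >=
    # the center pixel, else 0 (lower than center -> 0, per the text).
    r = m // 2
    padded = np.pad(gray, r, mode='edge')
    H, W = gray.shape
    bits = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbor = padded[r + dy : r + dy + H, r + dx : r + dx + W]
            bits.append(neighbor >= gray)
    return np.stack(bits, axis=-1)          # shape (H, W, m*m - 1)

def hamming_cost(census_left, census_right):
    # XOR the two bit vectors and count the non-zero entries, per pixel.
    return np.count_nonzero(census_left != census_right, axis=-1)

# Example usage on random gray-scale frames:
left = np.random.randint(0, 256, (48, 64)).astype(np.float32)
right = np.random.randint(0, 256, (48, 64)).astype(np.float32)
cost_ct = hamming_cost(census(left), census(right))   # shape (48, 64)

Note that each pixel's bit string and cost are computed from its own window only, which is the input-output independence the multi-thread processing relies on.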

The sampling-insensitive absolute difference (BT) can be used to compute the color cost between pixels by considering the subpixels. For example, when computing the cost between two pixels I_L(x,y) and I_R(x,y), it calculates the absolute color difference between (I_L(x−0.5,y), I_R(x,y)), (I_L(x,y), I_R(x,y)), (I_L(x+0.5,y), I_R(x,y)), (I_L(x,y), I_R(x−0.5,y)), (I_L(x,y), I_R(x+0.5,y)) and selects the minimum.

The cost can be written as Cost_BT(I_L(x), I_R(x)) = BT(I_L(x), I_R(x)). Both the census transform cost and the color cost can be normalized to the range 0 to 1 by an exponential function. The distance function also needs to be constructed such that error values are calculated independently. Therefore, the distance function dist(·,·) is defined as

dist(I_L(x), I_R(x)) = ½ (1 − exp(−Cost_CT(I_L(x), I_R(x))/λ_CT)) + ½ (1 − exp(−Cost_BT(I_L(x), I_R(x))/λ_BT))

where λ_CT and λ_BT control the normalization. dist(I_L(x), I_R(x)) is not dependent on dist(I_L(x + x'), I_R(x + x')); therefore, for every x, dist(I_L(x), I_R(x)) can be computed at the same time independently. At each pixel location (x,y), a range of disparity d ∈ {d_min, ..., d_max} is tested and a three-dimensional error map

ε(x; d) = dist(I_L(x + d(x)), I_R(x))

is generated. This is three-dimensional in the sense of (x,y,d) instead of (x,y,t), so processing is still frame by frame at this point in the initial disparity estimation. When correspondence between the left and right views is sought for a pixel (x,y), there will be multiple candidate values of d. The goal is to find the d that minimizes the cost. After we select d, we have an initial disparity map, and then we apply our spatio-temporal refinement.
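
A sketch of building the error map ε(x; d) follows, reusing census and hamming_cost from the sketch above. The values λ_CT and λ_BT, the simplified half-pixel BT cost, and the use of np.roll for the horizontal shift (which wraps at the border) are illustrative assumptions:

def bt_cost(shifted_left, right):
    # Simplified sampling-insensitive difference: compare against half-pixel
    # shifts (linear interpolation) and keep the minimum absolute difference.
    half_plus = 0.5 * (shifted_left + np.roll(shifted_left, -1, axis=1))
    half_minus = 0.5 * (shifted_left + np.roll(shifted_left, 1, axis=1))
    candidates = [np.abs(c - right) for c in (shifted_left, half_plus, half_minus)]
    return np.minimum.reduce(candidates)

def error_map(left, right, d_min, d_max, lambda_ct=30.0, lambda_bt=10.0):
    H, W = left.shape
    cr = census(right)
    eps = np.empty((H, W, d_max - d_min + 1), dtype=np.float32)
    for i, d in enumerate(range(d_min, d_max + 1)):
        shifted = np.roll(left, -d, axis=1)          # I_L(x + d)
        ct = hamming_cost(census(shifted), cr)
        bt = bt_cost(shifted, right)
        # each (x, y, d) entry is computed independently of all others
        eps[:, :, i] = 0.5 * (1 - np.exp(-ct / lambda_ct)) \
                     + 0.5 * (1 - np.exp(-bt / lambda_bt))
    return eps

# Winner-take-all disparity (before aggregation and refinement):
# d_hat = d_min + np.argmin(error_map(left, right, 0, 32), axis=2)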

Cross-Based Aggregation

The result of the distance function above is a three-dimensional map of errors where each error value is calculated independently. This independence is important for leveraging parallel processing. While it is possible to infer the disparity (at each pixel (x,y)) by picking the disparity that returns the smallest error, i.e.,

d(x) = argmin_d ε(x; d)

the result can be noisy because each pixel is treated independently of its neighborhood. Cross-based aggregation is applied to enhance spatial smoothness. Other smoothing enhancements can also be used. However, typical methods such as fixed block aggregation can cause edge smearing effects, and locally adaptive weight methods can be computationally expensive because aggregation occurs in two dimensions at the same time. Experiments showed that cross-based aggregation provides the best trade-off between speed and quality.

Cross-based aggregation starts with a color image. For simplicity of explanation and without loss of generality, this explanation focuses on the left image and denotes it as I(x). At each color pixel x_p = (x_p, y_p), the goal is to define a neighborhood U(x_p) such that for all pixels in U(x_p) the colors are similar. To this end, first define the top margin of U(x_p) to be the nearest vertical pixel x_q such that the color difference is less than a threshold τ:

v_p^+ = min{ x_q | ||I(x_p) − I(x_q)||_∞ < τ, x_q in the positive vertical direction }

where ||·||_∞ is the maximum of the three color components of I(x). Similarly, the bottom margin of U(x_p) is defined as:

v_p^− = min{ x_q | ||I(x_p) − I(x_q)||_∞ < τ, x_q in the negative vertical direction }

The top and bottom margins define a vertical strip of pixels labeled as {v_p^−, ..., v_p^+}.

FIG. 2 illustrates an example cross-based aggregation. For each pixel x_q ∈ {v_p^−, ..., v_p^+}, calculate the horizontal margins with respect to each x_q as follows:

h_q^+ = min{ x_r | ||I(x_q) − I(x_r)||_∞ < τ, x_r in the positive horizontal direction }

h_q^− = min{ x_r | ||I(x_q) − I(x_r)||_∞ < τ, x_r in the negative horizontal direction }

FIG. 2 graphically illustrates these definitions. A horizontal margin is defined for each x_q ∈ {v_p^−, ..., v_p^+}; thus, there is a set of horizontal margins, one for each x_q in the vertical strip. The union of all the horizontal margins defines the neighborhood U(x_p).

With the aid of cross-based aggregation, a non-uniform average of the three-dimensional error map is calculated. Specifically, for each pixel location x_p = (x_p, y_p), take the average of the error values ε(x; d) within the neighborhood U(x_p):

ε̄(x_p; d) = (1/|U(x_p)|) Σ_{x ∈ U(x_p)} ε(x; d)

where |U(x_p)| is the cardinality of the set U(x_p). The disparity can be determined by picking the d that minimizes the aggregated cost at each pixel location x_p:

d(x_p) = argmin_d ε̄(x_p; d)
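
A direct (unoptimized) sketch of this aggregation for one disparity slice follows; the threshold τ, the maximum arm length, and the assumption that color is an (H, W, 3) float array are illustrative, and a practical implementation would precompute the four arm lengths per pixel and use integral images:

import numpy as np

def arm(color, y, x, dy, dx, tau, max_arm):
    # Length of the support arm from (y, x) in direction (dy, dx): extend
    # while the max color difference to the anchor pixel stays below tau.
    H, W, _ = color.shape
    n = 0
    while n < max_arm:
        yy, xx = y + (n + 1) * dy, x + (n + 1) * dx
        if not (0 <= yy < H and 0 <= xx < W):
            break
        if np.max(np.abs(color[yy, xx] - color[y, x])) >= tau:
            break
        n += 1
    return n

def aggregate(color, err_slice, tau=20.0, max_arm=15):
    H, W, _ = color.shape
    out = np.empty_like(err_slice)
    for y in range(H):
        for x in range(W):
            total, count = 0.0, 0
            # vertical strip {v-, ..., v+} anchored at (y, x)
            for q in range(y - arm(color, y, x, -1, 0, tau, max_arm),
                           y + arm(color, y, x, 1, 0, tau, max_arm) + 1):
                # horizontal margins defined with respect to each x_q
                lo = x - arm(color, q, x, 0, -1, tau, max_arm)
                hi = x + arm(color, q, x, 0, 1, tau, max_arm)
                total += err_slice[q, lo:hi + 1].sum()
                count += hi - lo + 1
            out[y, x] = total / count    # non-uniform average over U(x_p)
    return out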

Occlusion Handling and Filling

The cross-based aggregation returns disparity maps for the left and right images independently. To make sure that both left and right disparities are spatially consistent, a left-right consistency check is performed to detect unreliable pixels. These unreliable pixels are those having different disparity in the left and right images.

FIG. 3 illustrates an example occlusion handling. In FIG. 3, for each unreliable pixel (x,y), the cross-based aggregation method generates a neighborhood for (x+s,y), as shown by the yellow region in FIG. 3, where (x+s,y) is the left-most reliable pixel. The white region indicates the unreliable (occluded) region, the dark grey region is background, and the light grey region is foreground. All reliable pixels within the neighborhood vote for the candidate disparity value at (x,y). The unreliable pixel at (x,y) is filled with the majority vote of the reliable pixels in the voting region. In this method of the invention, the unreliable pixel is not automatically selected as the center pixel for occlusion handling. Instead, the first non-occluded pixel to its left is selected to define the neighborhood.

In FIG. 3, a left disparity map is used as an example, where occlusion pixels (white) appear at the right side of the background and the left side of the foreground. (In the right image, occlusion pixels would appear at the left side of the background and the right side of the foreground.) Only the occlusion pixels are selected and need to be processed. For an arbitrary occlusion pixel (x,y), the method goes to its left neighbor pixel to see whether it is a non-occluded pixel. If it is occluded, the method continues to the left. If it is non-occluded, the procedure stops. In FIG. 3, for the pixel being examined, the process went left for s pixels. A neighborhood is constructed based on the cross-based aggregation method at point (x+s,y). Every non-occluded pixel within that region votes. The majority disparity value in that region is assigned to the occlusion pixel (x,y). FIG. 3 presents an ideal situation where the majority is obviously the background; therefore, the white region will be filled with the background. The resulting filling for every occluded pixel (x,y) is independent of the filling result for any other occluded pixel (x+x',y+y'). This permits parallel processing.
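
The following is a sketch of the consistency check and voting fill. A simple square window around the anchor (x+s,y) stands in for the cross-shaped neighborhood as an illustrative simplification, and the window size and threshold are assumptions:

import numpy as np

def lr_check(disp_left, disp_right, thresh=1):
    # A pixel is reliable if d_L(x, y) matches d_R(x - d_L, y).
    H, W = disp_left.shape
    xs = np.arange(W)[None, :].repeat(H, axis=0)
    x_r = np.clip(xs - disp_left.astype(int), 0, W - 1)
    d_r = np.take_along_axis(disp_right, x_r, axis=1)
    return np.abs(disp_left - d_r) <= thresh

def fill_occlusions(disp, reliable, win=7):
    H, W = disp.shape
    out = disp.copy()
    for y in range(H):
        for x in range(W):
            if reliable[y, x]:
                continue
            # walk left for s pixels to the first non-occluded pixel
            s = 0
            while x - s - 1 >= 0 and not reliable[y, x - s - 1]:
                s += 1
            anchor = max(x - s - 1, 0)
            y0, y1 = max(y - win, 0), min(y + win + 1, H)
            x0, x1 = max(anchor - win, 0), min(anchor + win + 1, W)
            # votes come from the original map, so each fill is independent
            votes = disp[y0:y1, x0:x1][reliable[y0:y1, x0:x1]]
            if votes.size:
                vals, counts = np.unique(votes, return_counts=True)
                out[y, x] = vals[np.argmax(counts)]   # majority vote
    return out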

Prior window-based voting methods have been based on (x,y) instead of (x+s,y). The number of non-occluded pixels in a window constructed based on (x,y) will be significantly smaller than in a window constructed based on (x+s,y). Therefore, such methods are much more sensitive to outliers due to fewer votes, resulting in inaccuracy.

Other methods, such as plane fitting, use RANSAC for multiple disparity planes, which is very computationally expensive. It is an iterative process (on the order of 100 iterations per plane) that treats the occlusion pixels as outliers, finds the plane that minimizes the error for the non-occluded regions, and fills the occlusion pixels as if they were on the plane.

Real-Time Disparity Spatio-Temporal Refinement

Refinement 16 involves solving the following minimization problem

minimize_f  λ||f − g||_1 + ||Df||_2

where g = vec(g(x, y, t)) and f = vec(f(x, y, t)) are the initial disparity and the optimization variable, respectively. The operator D is the spatial-temporal gradient operator that returns the horizontal, vertical, and temporal forward finite differences of f, i.e.,

Df = [D_x f, D_y f, D_t f]^T

To control the relative emphasis on each direction, D is modified to

D = [β_x D_x, β_y D_y, β_t D_t]^T

where (β_x, β_y, β_t) are scaling parameters. These could be defined by users or set according to particular devices. If no user input is detected, the betas can use a default setting, e.g., β_x = 1, β_y = 1, β_t = 10. These values can be determined and optimized experimentally for different types of video and sensor devices. The term ||Df||_2 denotes the total variation norm, defined as

||Df||_2 = Σ √( β_x²|D_x f|² + β_y²|D_y f|² + β_t²|D_t f|² )

where the sum is taken over all voxels of the space-time volume.
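
A minimal NumPy sketch of the weighted gradient operator D and the total variation norm on a volume f(y, x, t) follows; the circular boundary handling via np.roll is an illustrative assumption:

import numpy as np

def D(f, betas=(1.0, 1.0, 10.0)):
    # Return (Dx f, Dy f, Dt f), each scaled by its beta.
    bx, by, bt = betas
    dx = bx * (np.roll(f, -1, axis=1) - f)   # horizontal forward difference
    dy = by * (np.roll(f, -1, axis=0) - f)   # vertical forward difference
    dt = bt * (np.roll(f, -1, axis=2) - f)   # temporal forward difference
    return dx, dy, dt

def tv_norm(f, betas=(1.0, 1.0, 10.0)):
    dx, dy, dt = D(f, betas)
    return np.sum(np.sqrt(dx**2 + dy**2 + dt**2))

# Example: a piecewise-constant volume has a small TV norm.
f = np.zeros((32, 32, 8)); f[8:24, 8:24, :] = 5.0
print(tv_norm(f))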

An augmented Lagrangian method solves the above minimization problem by the following steps (at the k-th iteration). See, Chan et al., "An Augmented Lagrangian Method for Video Restoration," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2011.

f_{k+1} = argmin_f (ρ_o/2)||r_k − f + g||_2² + (ρ_r/2)||u_k − Df||_2² + z_k^T f + y_k^T Df

v_{k+1} = Df_{k+1} + (1/ρ_r) y_k

u_{k+1} = max{ ||v_{k+1}||_2 − 1/ρ_r, 0 } · v_{k+1}/||v_{k+1}||_2

r_{k+1} = max{ |f_{k+1} − g + (1/ρ_o) z_k| − λ/ρ_o, 0 } · sign(f_{k+1} − g + (1/ρ_o) z_k)

y_{k+1} = y_k − ρ_r (u_{k+1} − Df_{k+1})

z_{k+1} = z_k − ρ_o (r_{k+1} − f_{k+1} + g)

In the iterative method described above, the first problem (known as the f-subproblem) can be solved using the Fast Fourier Transform, as discussed in Chan et al. 2011, supra. It is also possible to solve the problem using the following iteration (known as the Jacobi iteration):

f_{i,j,k} = C_1 [ β_x²(f_{i−1,j,k} + f_{i+1,j,k}) + β_y²(f_{i,j−1,k} + f_{i,j+1,k}) + β_t²(f_{i,j,k−1} + f_{i,j,k+1}) ]
        + C_2 [ β_x(u^x_{i−1,j,k} − u^x_{i,j,k}) + β_y(u^y_{i,j−1,k} − u^y_{i,j,k}) + β_t(u^t_{i,j,k−1} − u^t_{i,j,k}) ]
        + C_3 [ β_x(y^x_{i,j,k} − y^x_{i−1,j,k}) + β_y(y^y_{i,j,k} − y^y_{i,j−1,k}) + β_t(y^t_{i,j,k} − y^t_{i,j,k−1}) ]
        + ρ_o C_3 (g_{i,j,k} + r_{i,j,k}) − C_3 z_{i,j,k}

where

C_1 = ρ_r/(ρ_o + K),  C_2 = ρ_r/(ρ_o + K),  C_3 = 1/(ρ_o + K),  K = 2ρ_r(β_x² + β_y² + β_t²).
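
For illustration, the following is a minimal NumPy sketch of one such iteration, reusing D() from the sketch above and defining its adjoint DT(). The parameter values (λ, ρ_o, ρ_r) and the number of Jacobi sweeps are illustrative assumptions:

import numpy as np

def DT(dx, dy, dt, betas=(1.0, 1.0, 10.0)):
    # Adjoint of D: weighted backward differences, circular boundaries.
    bx, by, bt = betas
    return (bx * (np.roll(dx, 1, axis=1) - dx)
            + by * (np.roll(dy, 1, axis=0) - dy)
            + bt * (np.roll(dt, 1, axis=2) - dt))

def shrink(x, kappa):
    # Soft threshold: max(|x| - kappa, 0) * sign(x).
    return np.maximum(np.abs(x) - kappa, 0.0) * np.sign(x)

def alm_iteration(f, g, u, r, y, z, lam=0.05, rho_o=1.0, rho_r=2.0,
                  betas=(1.0, 1.0, 10.0), sweeps=5):
    bx, by, bt = betas
    K = 2.0 * rho_r * (bx**2 + by**2 + bt**2)   # diagonal of rho_r * D^T D
    # f-subproblem: Jacobi sweeps on the normal equation
    # (rho_o*I + rho_r*D^T D) f = rho_o*(g + r) - z + D^T(rho_r*u - y)
    rhs = rho_o * (g + r) - z + DT(rho_r * u[0] - y[0],
                                   rho_r * u[1] - y[1],
                                   rho_r * u[2] - y[2], betas)
    for _ in range(sweeps):
        dx, dy, dt = D(f, betas)
        Mf = rho_o * f + rho_r * DT(dx, dy, dt, betas)
        f = f + (rhs - Mf) / (rho_o + K)
    # u-subproblem: vector shrinkage of v = Df + y/rho_r
    dx, dy, dt = D(f, betas)
    vx, vy, vt = dx + y[0] / rho_r, dy + y[1] / rho_r, dt + y[2] / rho_r
    mag = np.sqrt(vx**2 + vy**2 + vt**2)
    s = np.maximum(mag - 1.0 / rho_r, 0.0) / np.maximum(mag, 1e-12)
    u = (s * vx, s * vy, s * vt)
    # r-subproblem: scalar shrinkage
    r = shrink(f - g + z / rho_o, lam / rho_o)
    # multiplier updates
    y = tuple(yc - rho_r * (uc - dc) for yc, uc, dc in zip(y, u, (dx, dy, dt)))
    z = z - rho_o * (r - f + g)
    return f, u, r, y, z

# Typical usage: start from the initial disparity volume g and iterate ~20 times.
# f = g.copy(); u = y = tuple(np.zeros_like(g) for _ in range(3))
# r = np.zeros_like(g); z = np.zeros_like(g)
# for _ in range(20):
#     f, u, r, y, z = alm_iteration(f, g, u, r, y, z)

Every update above is either elementwise or a fixed-stencil neighbor operation, so each output element is independent of the others within a step, matching the multi-thread design of the method.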

The method described above requires multiple frames for the minimization. In another preferred embodiment of the invention, the minimization process is accomplished via a technique that does not require multiple frames. Thus, the minimization problem can be solved faster.

The minimization is modified such that the problem becomes

f = argmin_f  λ||f − g||_1 + ||Df||_2 + λκ||B(f − f̂)||_1

which can be written compactly as

f = argmin_f  λ||Af − b||_1 + ||Df||_2

where A = [I; κB] and b = [g; κB f̂], and which can be solved using the augmented Lagrangian method plus the Jacobi iteration. Here, f is the desired disparity, g is the initial disparity, and f̂ is the disparity solution for the previous frame. The L1 norm is chosen in the objective function as it preserves the piecewise constant structure of the disparity f. The second term is the total variation of f in space; the total variation norm is used to preserve the edges and suppress the noise. The third term is the difference between the current estimate and the previous solution, which enforces temporal consistency. B is a diagonal matrix whose (i,i)-th element is 1 if there is no motion at pixel i, and 0 otherwise.
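
A small sketch of constructing the stacked operator follows; the frame-difference motion test, the threshold, and κ are illustrative assumptions rather than the patent's motion detector:

import numpy as np

def build_stacked(g, f_prev, frame_t, frame_prev, kappa=2.0, tol=2.0):
    # B(i,i) = 1 where there is no motion at pixel i, else 0.
    no_motion = (np.abs(frame_t - frame_prev) < tol).astype(g.dtype)
    def A(f):
        # A f = [f; kappa * B f], stacked as one long vector
        return np.concatenate([f.ravel(), kappa * (no_motion * f).ravel()])
    b = np.concatenate([g.ravel(), kappa * (no_motion * f_prev).ravel()])
    return A, b

# The data term of the modified problem is then lam * np.sum(np.abs(A(f) - b)).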

An example solution is provided as follows:

The augmented Lagrangian function is given by

L(f, r, u, y, z) = λ||r||_1 + ||u||_2 − z^T(r − Af + b) + (ρ_o/2)||r − Af + b||_2² − y^T(u − Df) + (ρ_r/2)||u − Df||_2²

This can be broken into sub-problems:

f-sub: argmin_f  z^T Af + (ρ_o/2)||r − Af + b||_2² + y^T Df + (ρ_r/2)||u − Df||_2²

u-sub: argmin_u  ||u||_2 − y^T u + (ρ_r/2)||u − Df||_2²

r-sub: argmin_r  λ||r||_1 − z^T r + (ρ_o/2)||r − Af + b||_2²

with multiplier updates

y_{k+1} = y_k − ρ_r (u_{k+1} − Df_{k+1})

z_{k+1} = z_k − ρ_o (r_{k+1} − Af_{k+1} + b)

F-Sub Problem

Normal equation:

(ρ_o A^T A + ρ_r D^T D) f = ρ_o A^T(b + r) − A^T z + D^T(ρ_r u − y)

Expressing this explicitly using A = [I; κB], and partitioning r = [r_1; r_2] and z = [z_1; z_2] conformally:

(ρ_o(I + κ²B^T B) + ρ_r D^T D) f = ρ_o(g + κ²B^T f̂ + r_1 + κB^T r_2) − (z_1 + κB^T z_2) + D^T(ρ_r u − y)

Jacobi Method

The normal equation is solved elementwise using the same Jacobi update given above, with the diagonal term ρ_o(1 + κ²B^T B) in place of ρ_o.

U-Sub Problem

Using the shrinkage formula, with v = Df + y/ρ_r:

u = max{ ||v||_2 − 1/ρ_r, 0 } · v/||v||_2

R-Sub Problem

r = max{ |Af − b + z/ρ_o| − λ/ρ_o, 0 } · sign(Af − b + z/ρ_o)

Example Parameter Selection

Every variable mentioned in this section is a matrix, and every operation is elementwise. For example, f − g means, for every element, f(x,y) − g(x,y), where x ranges from 0 to the matrix width and y ranges from 0 to the matrix height. Every element (pixel) is independent in this operation. As another example, Bf means, for every element, B(x,y)·f(x,y), which is likewise independent. All the computation in this section is a combination of elementwise +, −, *, / operations.

Parallel GPU Computing with Kernels

FIG. 4 shows a preferred processing architecture in a GPU to achieve real-time disparity estimation using the Compute Unified Device Architecture (CUDA). Methods of the invention leverage GPU cores, and typical GPUs provide numerous cores. The NVIDIA GTX 580 is an older model with 512 cores per card, so a dual-card system uses 1024 cores for real-time operation. The newer NVIDIA GTX 680 has 1536 cores. CUDA exploits the massive parallel computing ability of the GPU by invoking kernels. In FIG. 4, a programmer-specified kernel 20 contains a number of blocks 22 N , and each block contains multiple threads 24 N . "N" is an independent moniker for each separate use in FIG. 4 and, as indicated in FIG. 4, does not indicate that the number of blocks is equal to the number of threads. The programmer specifies how many blocks and threads are allocated to the kernel. The optimal decision is affected by the particular model of GPU. For example, the GTX 260 is only capable of invoking 512 threads per block, while the GTX 580 is capable of invoking 1024 threads per block. Using the old GTX 260 setting of only 512 threads per block on a GTX 580 would be much slower than allocating 1024 threads per block.

CUDA distributes blocks randomly to idle streaming processor cores to compute. Every block 22 N has access to global memory 26, but the read/write calls for the global memory 26 should be minimized. Accessing it is slow because it is uncached. Each block has its own shared memory 28 N and registers 30 N for each thread. Read/Write for the shared memories 28 N and registers 30 N is very fast, but the size is very limited.

Clever usage of both the global memory and the shared memory/registers is therefore crucial to maximize speed-up. The initial disparity estimation 14 and real-time refinement 16 are decomposed into a number of kernels and executed in parallel fashion. Dual GPUs can be used to speed up the process.
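
For illustration, the following is a minimal Numba-CUDA sketch of the kernel/block/thread allocation just described. The gray-scale conversion kernel, the 2D launch geometry, and the use of the Numba library are illustrative assumptions, not the patent's kernel decomposition:

from numba import cuda
import numpy as np

@cuda.jit
def grayscale_kernel(rgb, gray):
    # Each thread computes one output pixel, independent of all others.
    x, y = cuda.grid(2)
    if y < gray.shape[0] and x < gray.shape[1]:
        gray[y, x] = (0.299 * rgb[y, x, 0] + 0.587 * rgb[y, x, 1]
                      + 0.114 * rgb[y, x, 2])

rgb = np.random.rand(1080, 1920, 3).astype(np.float32)
gray = np.zeros((1080, 1920), dtype=np.float32)

# 32 x 32 = 1024 threads per block, the maximum on a GTX 580;
# an older GTX 260 would be limited to 512 threads per block.
threads_per_block = (32, 32)
blocks_per_grid = ((1920 + 31) // 32, (1080 + 31) // 32)
grayscale_kernel[blocks_per_grid, threads_per_block](rgb, gray)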

Prior work (discussed below in the experimental results as RealtimeBP) does message passing as follows: M_right(x,y) = DataCost(x,y) + M_up(x,y) + M_down(x,y) + M_right(x−1,y). For the right message at pixel (x,y), M_right(x,y) has to wait for M_right(x−1,y) to be computed first; otherwise it cannot be computed. Therefore, that method cannot accomplish multi-thread computing on the whole image as the present method can. No such dependency occurs in the equations above, so the output is completely independent of the input.

FIGs. 5A-5C illustrate a flow for real-time execution of the FIG. 1 method using the census transform, cross-based aggregation, and occlusion handling and filling. In FIGs. 5A-5C, a left image is input and the output is the horizontal left cross, which creates the horizontal cross construction kernel 30. With the left image, a vertical left cross is output to create the vertical cross construction kernel 32. Comparable kernels are constructed using the right images in 34 and 36. Census transform kernels are constructed using the left and right images in 38 and 40. Next, a loop runs the distance function kernel 42, horizontal cross aggregation kernel 44 and vertical kernel 46 to produce an output three-dimensional error map.

The select disparity kernel 48 uses the three-dimensional error map and outputs left and right disparity maps, which are cross-checked and output with occlusion labeling on one via the cross-check kernel 50. Voting kernels 52 and 54 output results for selecting pixels for occlusion handling, and the filling kernels produce the initial disparity estimate from 14 in FIG. 1.

FIG. 5C illustrates the sub-problems for the solution that is specified above for the Lagrangian + Jacobi Iteration. The kernels 60-70 are labeled according to the specific sub-problems above. Typically, the flow of FIG. 5C will run about 20 times.

Experimental Data

In a preferred experimental implementation, the input stereo images are split into a top half and a bottom half, and apron pixels are assigned to each half of the image to reduce error at the borders. This addresses a problem caused by dual GPUs: two GPUs cannot communicate with each other without going through the CPU, and going through the CPU is slow. Therefore, a preferred method does not allow them to communicate during the disparity estimation process. Splitting the images allows each GPU to compute a different image independently. For example, assume the image is split without assigning apron pixels, into (0~w, 0~h/2) and (0~w, h/2~h). During aggregation in the initial disparity section, pixel (any x, h/2) will have no information from (any x, h/2 + a positive number), whereas pixel (any x, h/2) would have information from (any x, h/2 + a positive number) if the image were not split. Apron pixels are assigned in order to reduce this effect.
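
A sketch of such a split follows; the apron width of 16 rows is an illustrative assumption:

import numpy as np

def split_with_apron(image, apron=16):
    h = image.shape[0] // 2
    top = image[: h + apron]        # top half plus apron rows below it
    bottom = image[h - apron:]      # bottom half plus apron rows above it
    return top, bottom

def merge_halves(top, bottom, height, apron=16):
    # Discard each half's apron rows when reassembling the frame.
    h = height // 2
    return np.vstack([top[:h], bottom[apron:]])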

The system achieved around 25 fps for 1920x1080 video with the NVIDIA GTX 580 GPU, which has 512 streaming processor cores.

Among all 119 reported methods on the Middlebury Stereo database, Table 1 below compares the result of the present method with top-ranked real-time and semi-real-time methods. The performance of the methods is listed in ascending order of the average percentage of bad pixels (Ave). The present method ranked first among real-time (20+ fps) methods, and ranked ninth overall. Table 1

ADCensus - X. Mei et al., "On building an accurate stereo matching system on graphics hardware," GPUCV 2011.

Cost Filter - Christoph Rhemann et al., "Fast cost-volume filtering for visual correspondence and beyond," Computer Vision and Pattern Recognition Proceedings 2011.

RealtimeBFV - K. Zhang, J. Lu, and G. Lafruit, "Cross-based local stereo matching using orthogonal integral images," IEEE Trans. Circuits and Systems for Video Technology, vol. 19, no. 7, pp. 1073-1079, July 2009.

RealtimeBP - Q. Yang, L. Wang, R. Yang, S. Wang, M. Liao, and D. Nister, "Real-time global stereo matching using hierarchical belief propagation," BMVC 2006.

RealtimeGPU - L. Wang, M. Liao, M. Gong, R. Yang, and D. Nister, "High-quality real-time stereo using adaptive cost aggregation and dynamic programming," 3DPVT 2006.

DCBGrid - Christian Richardt et al., "Real-time spatiotemporal stereo matching using the dual-cross-bilateral grid," European Conference on Computer Vision 2010. Results for Cost Filter and DCBGrid were obtained using source code from the authors' websites. The website of Richardt et al. provides synthetic stereo video sequences "Book", "Street", "Tanks", "Temple", and "Tunnel" for public use. These sequences were also used to evaluate performance of the experimental version of the invention. The numerical comparison reveals significantly better performance, as shown in Table 2 below. Table 2

Nonideal sequences were also tested and evaluated quantitatively. The present system showed better performance for a compressed stereo movie trailer, stereo footage from a DaVinci surgery system, and stereo video captured from everyday life.

While specific embodiments of the present invention have been shown and described, it should be understood that other modifications, substitutions and alternatives are apparent to one of ordinary skill in the art. Such modifications, substitutions and alternatives can be made without departing from the spirit and scope of the invention, which should be determined from the appended claims.

Various features of the invention are set forth in the appended claims.