Title:
METHOD AND SYSTEM FOR FUSING SENSED MEASUREMENTS
Document Type and Number:
WIPO Patent Application WO/2017/110836
Kind Code:
A1
Abstract:
A method and system for fusing sensed measurements includes a depth sensor configured to acquire depth measurements of a scene as a sequence of frames, and a camera configured to acquire intensity measurements of the scene as a sequence of images, wherein a resolution of the depth sensor is less than a resolution of the camera. A processor searches for similar patches in multiple temporally adjacent frames of the depth measurements, groups the similar patches into blocks using the intensity measurements, increases a resolution of the blocks using prior constraints to obtain increased resolution blocks, and then constructs a depth image with a resolution greater than the resolution of the depth sensor from the increased resolution blocks.

Inventors:
KAMILOV ULUGBEK (US)
BOUFOUNOS PETROS T (US)
Application Number:
PCT/JP2016/088007
Publication Date:
June 29, 2017
Filing Date:
December 14, 2016
Assignee:
MITSUBISHI ELECTRIC CORP (JP)
International Classes:
G06T3/40
Domestic Patent References:
WO2000077734A2 (2000-12-21)
Foreign References:
US20150172623A1 (2015-06-18)
US20130010067A1 (2013-01-10)
US20150023563A1 (2015-01-22)
Other References:
KAMILOV ULUGBEK S ET AL: "Depth superresolution using motion adaptive regularization", INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO WORKSHOPS (ICMEW), IEEE, 11 July 2016 (2016-07-11), pages 1 - 6, XP032970814
PASCAL MAMASSIAN ET AL: "Interaction of visual prior constraints", VISION RESEARCH., vol. 41, no. 20, 1 September 2001 (2001-09-01), GB, pages 2653 - 2668, XP055355110
R. CHARTRAND: "Nonconvex splitting for regularized Low-Rank + Sparse decomposition", IEEE TRANS. SIGNAL PROCESS., vol. 60, no. 11, November 2012 (2012-11-01), pages 5810 - 5819
Attorney, Agent or Firm:
SOGA, Michiharu et al. (JP)
Claims:
[CLAIMS]

[Claim 1]

A method for fusing sensed measurements, comprising the steps of:

acquiring depth measurements of a scene as a sequence of frames with a depth sensor;

acquiring intensity measurements of the scene as a sequence of images with a camera, wherein a resolution of the depth sensor is less than a resolution of the camera, and further comprising the computerized steps of:

searching for similar patches in multiple temporally adjacent frames of the depth measurements;

grouping the similar patches into blocks using the intensity measurements;

increasing a resolution of the blocks using prior constraints to obtain increased resolution blocks;

constructing a depth image with a resolution greater than the resolution of the depth sensor from the increased resolution blocks; and

repeating the computerized steps until a termination condition is reached.

[Claim 2]

The method of claim 1, wherein the depth sensor and camera are calibrated.

[Claim 3]

The method of claim 1, wherein the depth sensor is a light radar (LIDAR) sensor.

[Claim 4]

The method of claim 1, wherein the camera is a video camera.

[Claim 5]

The method of claim 1, wherein the constructing minimizes a cost function that combines a quadratic data-fidelity term, and a regularizer that controls an error between the depth measurements and the depth image.

[Claim 6]

The method of claim 5, wherein the regularizer penalizes a rank of the depth image.

[Claim 7]

The method of claim 1, wherein the depth image is motion adaptive by combining measurements from multiple views of the scene.

[Claim 8]

The method of claim 1, wherein the searching and grouping uses a search area centered at a reference patch, and considers overlapping patches in each frame.

[Claim 9]

The method of claim 5, wherein the regularizer uses a ν-shrinkage operator.

[Claim 10]

The method of claim 5, wherein the cost function is minimized using an augmented-Lagrangian method.

[Claim 11]

The method of claim 10, wherein the augmented-Lagrangian method uses an alternating direction method of multipliers.

[Claim 12]

A system for fusing sensed measurements, comprising:

a depth sensor configured to acquire depth measurements of a scene as a sequence of frames;

a camera configured to acquire intensity measurements of the scene as a sequence of images, wherein a resolution of the depth sensor is less than a resolution of the camera; and

a processor configured to search for similar patches in multiple temporally adjacent frames of the depth measurements, to group the similar patches into blocks using the intensity measurements, to increase a resolution of the blocks using prior constraints to obtain increased resolution blocks, and to construct a depth image with a resolution greater than the resolution of the depth sensor from the increased resolution blocks.

Description:
[DESCRIPTION]

[Title of Invention]

METHOD AND SYSTEM FOR FUSING SENSED MEASUREMENTS

[Technical Field]

[0001]

This invention relates to sensing systems and methods, and more specifically to fusing sensed measurements output by low-resolution depth sensors and high-resolution optical cameras.

[Background Art]

[0002]

One of the important challenges in computer vision applications is acquiring high resolution depth maps of scenes. A number of common tasks, such as object reconstruction, robotic navigation, and automotive driver assistance can be significantly improved by complementing intensity data from optical cameras with high resolution depth maps. However, with current sensor technology, direct acquisition of high-resolution depth maps is very expensive.

[0003]

The cost and limited availability of such sensors imposes significant constraints on the capabilities of computer vision systems and hinders the adoption of methods that rely on high-resolution depth maps. Thus, a number of methods provide numerical alternatives to increase the spatial resolution of the measured depth data.

[0004]

One of the most popular and widely used class of techniques for improving the spatial resolution of depth data is guided depth superresolution. Those techniques jointly acquire depth maps of the scene using a low-resolution depth sensor, and optical images using a high-resolution optical camera. The data acquired by the camera is subsequently used to superresolve a low-resolution depth map. Those techniques exploit the property that both modalities share common features, such as edges and joint texture changes. Thus, such features in the optical camera data provide information and guidance that significantly enhances the superresolved depth map.

[0005]

In the past, most of those methods operated on a single optical image and low-resolution depth map. However, for most practical uses of such methods and systems, one usually acquires a video with the optical camera and a sequence of snapshots of the depth maps.

[0006]

One approach models the co-occurrence of edges in depth and intensity with Markov Random Fields (MRF). An alternative approach is based on joint bilateral filtering, where intensity is used to set the weights of a filter. The bilateral filtering can be refined by incorporating local statistics of depths. In another approach, geodesic distances are used for determining the weights. That approach can be extended to dynamic sequences to compensate for different data rates in the depth and intensity sensors. A guided image filtering approach can further improve edge preservation.

[0007]

More recently, sparsity-promoting regularization, which is an essential component of compressive sensing, has provided more dramatic improvements in the quality of depth superresolution. For example, improvements have been demonstrated by combining dictionary learning and sparse coding methods.

Another method relies on weighted total generalized variation (TGV) regularization for imposing a piecewise polynomial structure on depth.

[0008]

The conventional MRF approach can be combined with an additional term promoting transform domain sparsity of the depth in an analysis form. One method uses the MRF model to jointly segment objects and recover a higher quality depth. Depth superresolution can be performed by taking several snapshots of a static scene from slightly displaced viewpoints and merging the measurements using sparsity of the weighted gradient of the depth.

[0009]

Many natural images contain repetitions of similar patterns and textures. State-of-the-art image denoising methods, such as nonlocal means (NLM), and block matching and 3D filtering (BM3D), take advantage of this redundancy by processing the image as a structured collection of patches. The formulation of NLM can be extended to more general inverse problems using specific NLM regularizers. Similarly, a variational approach can be used for general BM3D-based image reconstruction. In the context of guided depth superresolution, NLM has been used for reducing the amount of noise in the estimated depth. Another method combines a block-matching procedure with low-rank constraints for enhancing the resolution of a single depth image.

[Summary of Invention]

[0010]

The spatial resolution of depth sensors is often significantly lower than that of conventional optical cameras. The embodiments of the invention improve the resolution of depth images using higher resolution optical intensity images as side information. More specifically, the embodiments fuse sensor measurements output by low-resolution depth sensors and high-resolution optical cameras.

[0011]

Incorporating temporal information in videos can significantly improve the results. In particular, the embodiments improve depth resolution by exploiting space-time redundant features in the depth and intensity images using motion-adaptive low-rank regularization. Results confirm that the embodiments can substantially improve the quality of the estimated high-resolution depth image. This approach can be a first component in methods and systems using vision techniques that rely on high resolution depth information.

[0012]

A key insight of the invention is based on the realization that information about one particular frame of depth measurements is replicated, in some form, in temporally adjacent frames. Thus, frames across time can be exploited to superresolve the depth map. One challenge is determining this information in the presence of scene, camera, and object motion between multiple temporally adjacent frames.

[0013]

Another challenge in incorporating time into depth estimation is that depths can change significantly between frames. This results in abrupt variations in depth values along the temporal dimension and can lead to significant degradation in the quality of the result. Thus, it is important to compensate for motion during estimation.

[0014]

To that end, the embodiments exploit space-time similarities in the measurements using motion adaptive regularization. Specifically, the method searches for similar depth patches that are grouped into blocks, and the blocks are superresolved and regularized using a rank penalty as a prior constraint.

[Brief Description of the Drawings]

[0015]

[Fig. 1]

Fig. 1 is a block diagram of a method and system for fusing sensed measurements according to embodiments of the invention.

[Fig. 2]

Fig. 2 is a schematic of block searching and grouping within a space-time search area according to embodiments of the invention.

[Fig. 3]

Fig. 3 is a graph of the ν-shrinkage operator according to embodiments of the invention.

[Description of Embodiments]

[0016]

As shown in Fig. 1, the embodiments of our invention fuse measurements output by low-resolution depth sensors and high-resolution optical cameras to improve the resolution of a depth image.

[0017]

The embodiments construct a high-resolution depth image by minimizing a cost function that combines a data-fidelity term and a regularizer. Specifically, we impose a quadratic data fidelity term that controls the error between the measured and estimated depth values. The regularizer groups similar depth patches from multiple frames and penalizes the rank of the resulting depth image.
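As an illustration only (the patent provides no source code), the cost described above can be written down directly. The sketch below uses a nuclear norm as a stand-in for the rank penalty and assumes the patch groups are supplied as precomputed index arrays, which is a simplification of the motion-adaptive grouping described later.

```python
import numpy as np

def fusion_cost(psi, H, phi, blocks, lam):
    """Quadratic data-fidelity plus a nuclear-norm surrogate of the rank penalty.

    psi    : (M,)   low-resolution depth measurements
    H      : (M, N) subsampling matrix
    phi    : (N,)   current high-resolution depth estimate (vectorized)
    blocks : list of (B, L) integer index arrays, one per group of similar patches
    lam    : regularization weight
    """
    data_fidelity = 0.5 * np.sum((psi - H @ phi) ** 2)
    rank_surrogate = sum(np.linalg.norm(phi[idx], ord="nuc") for idx in blocks)
    return data_fidelity + lam * rank_surrogate
```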

[0018]

Because we use patches from multiple frames, our method is implicitly motion adaptive. Thus, by effectively combining measurements from multiple views of the scene, depth estimates are improved.

[0019]

Method and System Overview

Depth measurements 120 are acquired of a scene 105 with a depth sensor 110, e.g., the depth sensor can be a light radar (LIDAR) sensor. In addition, intensity measurements 125 are acquired of the scene 105 with a camera 115, e.g., the camera is a high-resolution optical video camera. A resolution of the depth sensor is less than a resolution of the camera sensor. The sensor and the camera are calibrated.

[0020]

In computerized steps, the depth measurements are searched for similar patches, which are then grouped 130 into blocks using the intensity measurements. The search for the similar patches is performed in multiple temporally adjacent frames of the depth measurements. The searching and grouping are detailed below with reference to Fig. 2.

[0021]

A resolution of the blocks is increased 140 by using prior constraints to produce increased resolution blocks. Then, the increased resolution blocks are used to construct 145 a high-resolution depth image 150. That is, the resolution of the depth image is greater than the resolution of the depth sensor. The computerized steps are repeated until a termination condition is reached, e.g., a predetermined number of iterations, convergence of the resolution, or the end of the measurements.

[0022]

Problem Formulation

Our method and system acquires the depth measurements {ψ_t}, t ∈ [1, ..., T], of the scene. Each measurement ψ_t is considered as a downsampled version of a higher resolution depth image φ_t ∈ R^N using a subsampling operator H_t. Our end goal is to recover and construct the high-resolution depth image φ_t for all t, where t are temporal indices to the frames of the depth measurements.

[0023]

In this description of the embodiments, we use N to denote the number of pixels in each frame, T to denote the number of temporal frames, and M to denote the total number of depth measurements. Furthermore, ψ ∈ R^M denotes a vector of all the measurements, φ ∈ R^{NT} denotes the complete sequence of high-resolution depth maps, and H ∈ R^{M×NT} denotes the complete subsampling operator.

[0024]

We also acquire the intensity measurements 125 as a sequence of high-resolution intensity images x ∈ R^{NT} using the camera.

[0025]

Using the depth measurements and intensity measurements, a forward model for the depth recovery problem is

ψ = Hφ + e,   (1)

where e ∈ R^M denotes measurement noise. Thus, our objective is to recover and construct the high-resolution depth image 150 given the measurements ψ and x, and the sampling operator H.
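The forward model (1) can be exercised with a toy subsampling operator. The sketch below is an illustration under my own simplifying assumption that H simply keeps every factor-th pixel of a single frame, so Hφ reduces to indexing.

```python
import numpy as np

def subsampling_indices(height, width, factor):
    """Indices of a regular low-resolution grid inside a vectorized height*width frame."""
    rows = np.arange(0, height, factor)
    cols = np.arange(0, width, factor)
    return (rows[:, None] * width + cols[None, :]).ravel()

# Toy example of psi = H*phi + e for one 64x64 frame and 4x subsampling.
rng = np.random.default_rng(0)
height, width, factor = 64, 64, 4
phi = rng.standard_normal(height * width)      # stand-in high-resolution depth frame
idx = subsampling_indices(height, width, factor)
e = 0.01 * rng.standard_normal(idx.size)       # measurement noise
psi = phi[idx] + e                              # equivalent to H @ phi + e without forming H
```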

[0026]

We formulate the depth estimation task as an optimization problem

φ̂ = arg min_{φ ∈ R^{NT}} { (1/2) ||ψ − Hφ||_2^2 + Σ_{p=1}^{P} R(B_p φ) },   (2)

where R(B_p φ) is a regularization term that imposes prior constraints on the depth measurements.

[0027]

We form the regularization term by constructing sets of patches from each frame in the depth measurements. Specifically, we first define an operator B_p for each set of patches p ∈ [1, ..., P], where P is the number of such sets constructed. The operator extracts L patches of size B pixels from the frames in the depth measurements φ.

[0028]

Fig. 2 shows the block searching and grouping within a space-time search area 210. The figure shows frames 201, 202 and 203, respectively, at times t−1, t and t+1. The frames include various features 205.

[0029]

The search area in the current frame t is centered at a reference patch 220. The search is conducted in an identical window position in multiple temporally adjacent frames. Similar patches are grouped to construct a block β_p = B_p φ. Each block β_p = B_p φ ∈ R^{B×L} is obtained by first selecting the reference patch, and then finding L − 1 similar patches within the current frame, as well as in adjacent temporal frames.

[0030]

To determine similarity and to group similar patches, we use the intensity measurements in the optical image as a guide. To reduce the computational complexity of the search, we restrict the search to a space-time window of fixed size around the reference patch 220. We perform the same block searching and grouping for all space-time frames by moving the reference patch, and by considering overlapping patches in each frame. Thus, each pixel in the measurements φ can contribute to multiple blocks.
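A possible realization of this intensity-guided search follows (a sketch only; the patch size, window radius, and squared-difference metric are my choices, not values specified in the patent).

```python
import numpy as np

def match_block(intensity, t_ref, i_ref, j_ref, patch=8, window=16, L=10):
    """Return (t, i, j) corners of the L patches most similar to the reference patch.

    intensity           : (T, H, W) sequence of high-resolution intensity frames (the guide)
    t_ref, i_ref, j_ref : frame index and top-left corner of the reference patch
    """
    T, H, W = intensity.shape
    ref = intensity[t_ref, i_ref:i_ref + patch, j_ref:j_ref + patch]
    candidates = []
    for t in (t_ref - 1, t_ref, t_ref + 1):          # temporally adjacent frames
        if not 0 <= t < T:
            continue
        for i in range(max(0, i_ref - window), min(H - patch, i_ref + window) + 1):
            for j in range(max(0, j_ref - window), min(W - patch, j_ref + window) + 1):
                cand = intensity[t, i:i + patch, j:j + patch]
                dist = float(np.sum((cand - ref) ** 2))   # similarity measured on intensity
                candidates.append((dist, t, i, j))
    candidates.sort(key=lambda c: c[0])
    return [(t, i, j) for _, t, i, j in candidates[:L]]
```

Overlapping reference patches are handled simply by calling the routine for every reference position, which is why each pixel can appear in several blocks.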

[0031]

The adjoint B_p^T of B_p corresponds to replacing the patches in the block at their original locations in the depth measurements φ. The adjoint satisfies the following property

Σ_{p=1}^{P} B_p^T B_p = R,   (3)

where R = diag(r_1, ..., r_{NT}) ∈ R^{NT×NT} and r_n denotes the total number of references to the n-th pixel by the matrices {B_p}, p = 1, ..., P. Therefore, the depth measurements φ can be expressed in terms of an overcomplete representation using the blocks

φ = R^{−1} Σ_{p=1}^{P} B_p^T B_p φ.   (4)
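The operators B_p and B_p^T amount to gather and scatter operations. The sketch below (an illustration under the same toy conventions as the earlier sketches) also accumulates the per-pixel reference counts that form the diagonal matrix R in (3).

```python
import numpy as np

def extract_block(depth, coords, patch=8):
    """B_p: stack the L patches at the given (t, i, j) corners as columns of a B x L matrix."""
    cols = [depth[t, i:i + patch, j:j + patch].ravel() for t, i, j in coords]
    return np.stack(cols, axis=1)

def adjoint_block(block, coords, shape, patch=8):
    """B_p^T: return patches to their original locations and count references per pixel."""
    acc = np.zeros(shape)
    counts = np.zeros(shape)                    # contributes to the diagonal of R
    for col, (t, i, j) in zip(block.T, coords):
        acc[t, i:i + patch, j:j + patch] += col.reshape(patch, patch)
        counts[t, i:i + patch, j:j + patch] += 1.0
    return acc, counts
```

Summing acc and counts over all groups and dividing elementwise reproduces the overcomplete representation (4).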

[0032]

Rank Regularization

Each block, represented as a matrix, contains multiple similar patches, i.e., similar columns. Thus, we expect the matrix to have a low rank, making rank a natural regularizer for the problem

R(β) = rank(β),   β ∈ R^{B×L}.   (5)

[0033]

By seeking a low-rank solution to (2), we exploit the similarity of corresponding blocks to guide superresolution while enforcing consistency with the sensed intensity measurements. However, the rank regularizer (5) is of little practical interest because its direct optimization is intractable. One approach around this is to convexify the rank by replacing it with the nuclear norm:

R(β) = λ ||β||_* = λ Σ_{k=1}^{min(B,L)} σ_k(β),   (6)

where σ_k(β) denotes the k-th largest singular value of β, and λ > 0 is a parameter controlling the amount of regularization.

[0034]

In addition to its convexity, the nuclear norm is an appealing penalty to optimize because the nuclear norm has a closed form proximal operator

prox_{λ||·||_*}(ψ) ≜ arg min_{β ∈ R^{B×L}} { (1/2) ||ψ − β||_F^2 + λ ||β||_* } = u η_λ(σ) v^T,   (7)

where ψ = u σ v^T is the singular value decomposition (SVD) of ψ, and η_λ is a soft-thresholding function applied to the diagonal matrix σ.
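Equation (7) translates directly into a few lines of code (a sketch, not the patent's implementation):

```python
import numpy as np

def prox_nuclear(block, lam):
    """Proximal operator of lam * ||.||_*: soft-threshold the singular values of the block."""
    u, sigma, vt = np.linalg.svd(block, full_matrices=False)
    sigma_shrunk = np.maximum(sigma - lam, 0.0)     # eta_lam applied to the spectrum
    return (u * sigma_shrunk) @ vt                  # equivalent to u @ diag(sigma_shrunk) @ vt
```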

[0035]

It is known that nonconvex regularizers consistently outperform the nuclear norm by providing better denoising capability without losing important signal components. We use the following nonconvex generalization of the nuclear norm

R(β) = λ G_{λ,ν}(β) ≜ λ Σ_{k=1}^{min(B,L)} g_{λ,ν}(σ_k(β)),   (8)

see, e.g., R. Chartrand, "Nonconvex splitting for regularized Low-Rank + Sparse decomposition," IEEE Trans. Signal Process., vol. 60, no. 11, pp. 5810-5819, November 2012.

[0036]

The scalar function g_{λ,ν} satisfies

min_{y ∈ R} { (1/2) |x − y|^2 + λ g_{λ,ν}(y) } = h_{λ,ν}(x),   (9)

where h_{λ,ν} is the ν-Huber function

h_{λ,ν}(x) ≜ (1/2) |x|^2 for |x| ≤ λ^{1/(2−ν)}, and h_{λ,ν}(x) ≜ (λ/ν) |x|^ν − δ otherwise,   (10)

with δ ≜ (1/ν − 1/2) λ^{2/(2−ν)}.

[0037]

Although g_{λ,ν} is nonconvex and has no closed-form formula, its proximal operator does admit a closed-form expression

prox_{λ g_{λ,ν}}(ψ) = u T_{λ,ν}(σ) v^T,   (11)

where ψ = u σ v^T is the SVD of ψ as before, and T_{λ,ν} is a pointwise ν-shrinkage operator defined as

T_{λ,ν}(x) ≜ max(0, |x| − λ|x|^{ν−1}) x/|x|.   (12)
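A pointwise implementation of (12), applied to the singular values as in (11), is sketched below (my own illustration under the reconstructed formulas above).

```python
import numpy as np

def nu_shrinkage(x, lam, nu):
    """Pointwise nu-shrinkage: max(0, |x| - lam * |x|**(nu - 1)) * sign(x)."""
    ax = np.abs(x)
    power = np.power(ax, nu - 1.0, where=ax > 0, out=np.zeros_like(ax))
    return np.maximum(0.0, ax - lam * power) * np.sign(x)

def prox_nonconvex_rank(block, lam, nu):
    """Counterpart of (11): apply nu-shrinkage to the singular values of a block."""
    u, sigma, vt = np.linalg.svd(block, full_matrices=False)
    return (u * nu_shrinkage(sigma, lam, nu)) @ vt
```

Setting nu = 1 recovers the soft thresholding of (7), matching the behavior illustrated in Fig. 3.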

[0038]

Fig. 3 is a graph of the ν-shrinkage operator T_{λ,ν} for a fixed λ = 1 at two values of ν. For ν = 1, the ν-shrinkage (12) is equivalent to conventional soft thresholding. When ν → 0, the operator approaches hard thresholding, which is similar to principal component analysis (PCA) in the sense that the operator only retains the significant principal components.

[0039]

Thus, the regularizer (8) is a computationally tractable alternative to the rank penalty. While the regularizer is not convex, it can still be efficiently optimized due to the closed form of its proximal operator. Note that due to the nonconvexity of our regularizer for ν < 1, it is difficult to theoretically guarantee global convergence. However, we have empirically observed that our methods converge reliably over a broad spectrum of examples.

[0040]

Iterative Optimization

To solve the optimization problem (2) using the rank regularizer (8), we simplify our notation by defining an operator B ≜ (B_1, ..., B_P), and a vector β ≜ Bφ = (β_1, ..., β_P).

[0041]

The minimization is performed using an augmented-Lagrangian (AL) method. Specifically, we seek the critical points of the following AL

L(φ, β, s) ≜ (1/2) ||ψ − Hφ||_2^2 + Σ_{p=1}^{P} R(β_p) + (ρ/2) ||β − Bφ||_2^2 + s^T(β − Bφ),

where ρ > 0 is the quadratic parameter, and s is the dual variable that imposes the constraint β = Bφ.

[0042]

Conventionally, the AL method solves (2) by alternating between a joint minimization step and an update step as

(φ^k, β^k) ← arg min_{φ ∈ R^{NT}, β ∈ R^{P×B×L}} { L(φ, β, s^{k−1}) },   (15)

s^k ← s^{k−1} + ρ(β^k − Bφ^k).   (16)

[0043]

However, the joint minimization in (15) is typically computationally intensive. To reduce the complexity, we separate (15) into a succession of simpler steps using the alternating direction method of multipliers (ADMM).

[0044]

The steps are as follows

φ^k ← arg min_{φ ∈ R^{NT}} { L(φ, β^{k−1}, s^{k−1}) },   (17)

β^k ← arg min_{β ∈ R^{P×B×L}} { L(φ^k, β, s^{k−1}) }, and   (18)

s^k ← s^{k−1} + ρ(β^k − Bφ^k).   (19)

[0045]

By ignoring the terms that do not depend on the depth measurements φ, (17) amounts to solving a quadratic problem

φ^k ← arg min_{φ ∈ R^{NT}} { (1/2) ||ψ − Hφ||_2^2 + (ρ/2) ||Bφ − z^{k−1}||_2^2 },   (20)

where z^{k−1} ≜ β^{k−1} + s^{k−1}/ρ. Solving this quadratic problem is efficient because the inversion is performed on a diagonal matrix. Similarly, (18) is solved by

β^k ← arg min_{β ∈ R^{P×B×L}} { (ρ/2) ||β − y^k||_2^2 + Σ_{p=1}^{P} R(β_p) },   (21)

with y^k ≜ Bφ^k − s^{k−1}/ρ.

[0046]

This step can be solved via a block-wise application of a proximal operator as

β_p^k ← prox_{(λ/ρ) g_ν}(B_p φ^k − s_p^{k−1}/ρ),   (22)

for all p ∈ [1, ..., P].
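Putting the pieces together, the ADMM iteration (17)-(19) with the explicit updates (20)-(22) can be sketched as below. This is an illustration under simplifying assumptions: H and every B_p are realized as integer index selections (as in the earlier sketches), the patch groups are held fixed across iterations, and prox_nonconvex_rank refers to the function sketched above.

```python
import numpy as np

def admm_depth_fusion(psi, meas_idx, blocks, N, lam, nu, rho, num_iters=50):
    """ADMM sketch for (17)-(19); the system matrices are diagonal, so (20) is elementwise.

    psi      : (M,) depth measurements
    meas_idx : (M,) indices such that H @ phi == phi[meas_idx]
    blocks   : list of (B, L) integer index arrays, one per patch group B_p
    N        : number of pixels in the full space-time volume phi
    """
    phi = np.zeros(N)
    beta = [np.zeros(idx.shape) for idx in blocks]
    dual = [np.zeros(idx.shape) for idx in blocks]

    # Diagonal of H^T H + rho * sum_p B_p^T B_p (both operators are index selections).
    diag = np.zeros(N)
    np.add.at(diag, meas_idx, 1.0)
    for idx in blocks:
        np.add.at(diag, idx.ravel(), rho)

    for _ in range(num_iters):
        # phi-step (20): quadratic problem with a diagonal system matrix.
        rhs = np.zeros(N)
        np.add.at(rhs, meas_idx, psi)
        for b, s, idx in zip(beta, dual, blocks):
            np.add.at(rhs, idx.ravel(), (rho * b + s).ravel())
        phi = rhs / np.maximum(diag, 1e-12)

        # beta-step (21)-(22): block-wise singular-value shrinkage.
        for p, idx in enumerate(blocks):
            beta[p] = prox_nonconvex_rank(phi[idx] - dual[p] / rho, lam / rho, nu)

        # dual update (19).
        for p, idx in enumerate(blocks):
            dual[p] += rho * (beta[p] - phi[idx])

    return phi
```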

[0047]

Simplified Method

The above iterative optimization method can be significantly simplified by decoupling the enforcement of the data-fidelity from the enforcement of the rank-based regularization. The simplified method reduces the computational complexity while making the estimation more uniform across the entire space-time depth measurements.

[0048]

Due to the inhomogeneous distribution of pixel references generated by matching across the image, using a penalty with a single regularization parameter highly penalizes pixels with a large number of references. The resulting nonuniform regularization makes the method potentially oversensitive to the choice of the parameter λ. Instead, we rely on the simplified method

β_p^k ← arg min_{β_p ∈ R^{B×L}} { (1/2) ||β_p − B_p φ^{k−1}||_2^2 + R(β_p) },   (23)

φ^k ← arg min_{φ ∈ R^{NT}} { (1/2) ||ψ − Hφ||_2^2 + (ρ/2) ||φ − φ̃^k||_2^2 },   (24)

where φ̃^k ≜ R^{−1} B^T β^k, λ > 0 is the regularization parameter, and ρ > 0 is the quadratic parameter.

[0049]

To solve (23) we apply the proximal operator

β_p^k ← prox_{λ g_ν}(B_p φ^{k−1}),   (25)

for all p ∈ [1, ..., P]. Next, (24) reduces to a linear step

φ^k ← (H^T H + ρI)^{−1}(H^T ψ + ρ φ̃^k).   (26)
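The simplified alternation (23)-(26) drops the dual variable entirely. A sketch under the same index-selection assumptions follows, again reusing prox_nonconvex_rank from above and keeping the patch grouping fixed, whereas in practice it would be refreshed from the intensity-guided search.

```python
import numpy as np

def simplified_depth_fusion(psi, meas_idx, blocks, N, lam, nu, rho, num_iters=50):
    """Alternate the block-wise proximal step (25) with the linear data-consistency step (26)."""
    phi = np.zeros(N)

    counts = np.zeros(N)                       # diagonal of R in (3)
    for idx in blocks:
        np.add.at(counts, idx.ravel(), 1.0)

    hth = np.zeros(N)                          # diagonal of H^T H (H is a selection here)
    np.add.at(hth, meas_idx, 1.0)
    hty = np.zeros(N)                          # H^T psi
    np.add.at(hty, meas_idx, psi)

    for _ in range(num_iters):
        # (25): superresolve each block of similar patches with the rank-based prior.
        beta = [prox_nonconvex_rank(phi[idx], lam, nu) for idx in blocks]

        # phi_tilde = R^{-1} B^T beta: average the overlapping patch estimates.
        acc = np.zeros(N)
        for b, idx in zip(beta, blocks):
            np.add.at(acc, idx.ravel(), b.ravel())
        phi_tilde = acc / np.maximum(counts, 1.0)

        # (26): enforce consistency with the measurements; the system matrix is diagonal.
        phi = (hty + rho * phi_tilde) / (hth + rho)

    return phi
```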

[0050]

There are substantial similarities between the iterative optimization and simplified methods. The main differences are that we eliminated the dual variable s, and simplified the quadratic subproblem (20).

[0051]

Effect of the Invention

The embodiments provide a novel motion-adaptive method and system for superresolution of depth images. The method searches for similar patches from several frames, which are grouped into blocks that are then superresolved using a rank regularizer. Using this approach, we can produce high-resolution depth images from low-resolution depth measurements. Compared to conventional techniques, the method preserves temporal edges in the solution and effectively mitigates noise in practical configurations.

[0052]

While our method has a higher computational complexity than conventional approaches that process each frame individually, it allows us to incorporate a very effective regularization for stabilizing the inverse problem associated with superresolution. The method enables efficient computation and straightforward implementation by reducing the problem to a succession of simple operations. Results demonstrate the considerable benefits of incorporating time and motion adaptivity into inverse problems for depth estimation.

[0053]

Key contributions include providing a novel formulation for guided depth superresolution incorporating temporal information. In this formulation, the high-resolution depth is determined by solving an inverse problem that minimizes a cost. This cost includes a quadratic data-fidelity term, as well as a motion adaptive regularizer based on a low-rank penalty on groups of similar patches.

[0054]

Two optimization strategies are described for solving our estimation problem. The first approach is based on an exact optimization of the cost via the alternating direction method of multipliers (ADMM). The second approach uses a simplified procedure that alternates between enforcing data-consistency and enforcing the low-rank penalty.