

Title:
METHOD OF REAL-TIME GENERATION OF 3D IMAGING
Document Type and Number:
WIPO Patent Application WO/2021/074298
Kind Code:
A1
Abstract:
The present invention relates to a method of real-time generation of a 3D geometry of an object. The method comprises the steps of calibrating at least one RGB camera pair arranged to provide images of the object, receiving input images of the object from the at least one RGB camera pair, and performing a stereo reconstruction in a first hierarchical level using the input images. The stereo reconstruction step further comprises the steps of a) performing a ray marching operation in a first resolution on the input images to determine geometry positions along each view ray of the images; b) applying a uniqueness criterion to the geometry positions; c) determining a normal for each geometry position; d) performing a regularization operation based on the geometry positions and the respective normal, providing updated geometry positions; and e) performing an interpolation operation on the updated geometry positions and respective normal. The method further comprises repeating steps a) and c)-e) in at least one iteration in at least one ascending hierarchical level, wherein the resolution in the ray marching operation is doubled for each iteration, resulting in a geometry buffer for each of the at least one camera pair. The present invention further relates to a 3D image generation device and a system comprising such device.

Inventors:
ASSARSSON ULF (SE)
RASMUSON SVERKER (SE)
Application Number:
PCT/EP2020/079048
Publication Date:
April 22, 2021
Filing Date:
October 15, 2020
Assignee:
DIMENSION STREAM LABS AB (SE)
International Classes:
G06T7/55; H04N13/111; H04N13/117; H04N13/275; H04N21/218
Foreign References:
US20020126900A12002-09-12
Other References:
STEPHEN LOMBARDI ET AL: "Neural Volumes: Learning Dynamic Renderable Volumes from Images", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 18 June 2019 (2019-06-18), XP081383263, DOI: 10.1145/3306346.3323020
ZHENG-NING LIU ET AL: "High-quality Textured 3D Shape Reconstruction with Cascaded Fully Convolutional Networks", IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS., vol. 14, 1 August 2019 (2019-08-01), US, pages 1 - 1, XP055747617, ISSN: 1077-2626, DOI: 10.1109/TVCG.2019.2937300
ALVARO COLLET ET AL: "High-quality streamable free-viewpoint video", ACM TRANSACTIONS ON GRAPHICS, vol. 34, no. 4, 27 July 2015 (2015-07-27), US, pages 69:1 - 69:13, XP055424381, ISSN: 0730-0301, DOI: 10.1145/2766945
RICHARD A. NEWCOMBE: "KinectFusion: Real-Time Dense Surface Mapping and Tracking", 1 January 2011 (2011-01-01), XP055507122, Retrieved from the Internet [retrieved on 20180914]
MICHAEL ZOLLHÖFER ET AL: "State of the Art on 3D Reconstruction with RGB-D Cameras", COMPUTER GRAPHICS FORUM, vol. 37, no. 2, 1 May 2018 (2018-05-01), GB, pages 625 - 652, XP055645281, ISSN: 0167-7055, DOI: 10.1111/cgf.13386
Attorney, Agent or Firm:
AWA SWEDEN AB (SE)
Claims:
CLAIMS

1. Method (100) of real-time generation of a 3D geometry of an object, the method comprising the steps of:
calibrating (20) at least one RGB camera pair arranged to provide images of the object;
receiving (10) at least one pair of input images of the object from the at least one RGB camera pair;
performing a stereo reconstruction (50) using the pair of input images, comprising the steps of:
a) performing a ray marching operation (52) in a first hierarchical level with a first resolution on the at least one pair of input images to determine geometry positions along each view ray of the images, wherein the first resolution is based on a target geometric resolution for a highest hierarchical level;
b) applying a uniqueness criterion (54) to the geometry positions and discarding geometry positions not fulfilling the uniqueness criterion;
c) determining a normal for each geometry position;
d) performing a regularization operation (56) based on the geometry positions and the respective normal by intersecting each view ray with planes constructed from neighboring geometry positions and normals, and providing updated geometry positions as edge-preserving weighted averages of said intersections with respective updated normal;
e) performing an interpolation operation (58) on the updated geometry positions and optionally the respective normal to increase the resolution in each dimension, by first a horizontal interpolation of the geometry positions followed by a vertical interpolation of the geometry positions;
wherein the method further comprises repeating steps a) and c)-e) in at least one iteration (60) in at least one ascending hierarchical level of the ray marching operation, wherein the resolution in the ascending hierarchical level of the ray marching operation (50) is increased in each dimension by the interpolation operation for each iteration until the target geometric resolution for the highest hierarchical level is reached, resulting in a geometry buffer for each of the at least one camera pair,
wherein the method further comprises, after the final iteration, a step of performing a mesh alignment (80) comprising the steps of:
f) performing a triangulation of the position buffers and normals into meshes (70);
g) performing a non-rigid alignment to merge the meshes;
h) creating a distance map comprising, for each projected pixel in the mesh, screen-space coordinates of the object silhouette closest to that particular screen-space location;
i) rendering, for each mesh of a camera of a camera pair, all other meshes from the camera’s view to get a depth map;
j) performing, for each position in the mesh of the camera, a screen-space ray marching operation in the normal direction in the depth map to determine if there are any intersections with another mesh, and, if an intersection is determined, storing a vector in the direction of the intersection, or the average of the intersections if there are several, in an output texture;
k) averaging the vectors for each mesh with their neighboring vectors, wherein the averaged vectors are applied to each corresponding position in the mesh.

2. The method according to claim 1, wherein the method prior to the step of performing a stereo reconstruction further comprises a step of creating a visual hull (40), and wherein the stereo reconstruction (50) is based on the visual hull.

3. The method according to claim 2, wherein the step of creating a visual hull (40) comprises a step of determining intersections of silhouette cones from a camera towards the object (2) by rendering every other camera’s silhouette cone, using the depth values of these rendered silhouette cones to create a depth map after repeating the step for every camera in the at least one camera pair, wherein the depth map is used as starting position for the stereo reconstruction (50).

4. The method according to any of the preceding claims, wherein the step of performing an interpolation operation (58) comprises a step of performing a horizontally aligned interpolation of the positions and normals followed by a vertically aligned interpolation of the positions and normals.

5. The method according to any of the preceding claims, wherein the resolution is doubled for each iterated ascending hierarchical level, and the step of interpolation comprises a doubling of the resolution in each dimension.

6. The method according to any of the preceding claims, wherein the length of the vectors stored in the output texture is weighted with the distance to the closest silhouette point determined from the distance map.

7. The method according to any of the preceding claims, wherein the method further comprises a step of

l) updating the normals based on the aligned mesh geometry.

8. The method according to claim 7, wherein the method further comprises a step of m) repeating steps f)-l) in at least one iteration for all meshes until a desired level of alignment is reached.

9. The method according to any of the preceding claims, further comprising a step of determining the surface to which positions in the buffer belong when meshes are overlapping, comprising rendering a closest front-facing surface and a closest back-facing surface, sorting view-samples according to depth, determining whether a back-facing sample is between two front-facing samples, and, if so, discarding subsequent samples.

10. The method according to any of the preceding claims, wherein the method further comprises a step of: performing a color blending operation (90) in which colors are sampled from said one or more camera pair and weighted together.

11. The method according to claim 10, wherein the weighting of the sampled colors uses a linear combination of two weights for each sample.

12. The method according to claim 11, wherein the color C of each color sample is determined as a combination, controlled by a parameter f between 0 and 1, of each color sample c_n weighted by the weights w_d,n and w_v,n for each sample n, wherein the first weight w_d,n is sampled directly from the distance map and the second weight w_v,n is computed from v_o,n, the view space position of the sample n, and v_c,n, the corresponding camera position.

13. The method according to any of the preceding claims, wherein the step of calibrating the at least one camera pair comprises a step of performing a structure from motion, SfM, process using the object.

14. A 3D image generation device (5) comprising a receiving unit and a processing unit, wherein the device is configured to perform the method according to any of the preceding claims.

15. A system for generating a 3D image reconstruction comprising at least one RGB camera pair (3) and a 3D image generation device (5) according to claim 14.

Description:
METHOD OF REAL-TIME GENERATION OF 3D IMAGING

Technical Field

The present disclosure relates to 3D imaging generation, and especially a method for real-time generation of 3D imaging.

Background

In “High-quality streamable free-viewpoint video” (ACM Trans. Graph. 34, July 2015), Collet et al. implement a full end-to-end free-viewpoint video pipeline for performance capture which achieves high-quality reconstruction for a number of scenes. They combine RGB, active stereo in IR and silhouette information with Poisson Surface Reconstruction to compute high-quality meshes for each frame, and they track mesh deformations to handle topology changes. Their method requires a high number of cameras and a large studio of specialized equipment. Their method is also computationally expensive; processing one frame on a machine with a dual 12-core processor and an AMD Radeon R9200 GPU takes them 28.2 minutes.

There are other real-time implementations capable of handling dynamic content satisfactorily. A template-based approach achieves high-quality reconstruction of deformable models in real time using a custom-built RGBD camera. While compelling, such a method uses only a single view and requires a template to first be acquired for each scene. Such a template model is fixed and cannot handle changing scene topologies.

Another strategy is to extend Kinect Fusion, a popular method for scanning static geometry, which fuses depth maps from a single Kinect sensor at a high frame rate into a volumetric signed distance field. This method handles dynamic scenes by using non-rigid volumetric fusion, where new depth frames are warped to a reference model in a non-rigid manner. This approach does not need an explicit template but still has problems with changing scene topologies, since the method relies on a reference surface model captured at a single point in time. Fusion4D is a method that combines a volumetric approach with estimation of a non-rigid motion field between frames, as well as using an active stereo approach for acquiring depth information. This approach is state of the art in terms of quality and handles multiple views in real time. It also handles foreground/background segmentation without a green screen. However, it is still computationally expensive and uses dedicated machines with 3.4GHz Intel Core i7 CPUs and two NVIDIA Titan X GPUs to generate each pair of RGBD streams.

Following and extending this approach, the method of Holoportation creates a room-sized reconstruction that allows for an immersive telepresence system. This system uses a similar approach to Fusion4D but extends it to an interactive system, allowing users to communicate with each other in real time using AR or VR headsets. Again, this system is computationally expensive, requiring dedicated machines for each depth and color stream.

Further, there exists a vast body of research dedicated to facial reconstruction. One compelling method uses depth cameras for real-time reconstruction, computing a dense correspondence field between the input image and a generic face model. More recent methods use deep-learning-based approaches from single images. Some of these approaches also achieve real-time speeds. All of them, however, require strong priors and explicit or learned models of faces. One method without such priors uses passive stereo and achieves convincing results using an iterative stereo refinement approach. This method is, however, far from real time, taking about 20 minutes per frame in its implementation.

Real-time 3D reconstruction has gained momentum since the advent of cheap RGBD cameras such as the Kinect. These devices are often compelling because they can produce depth maps at a high frame rate while being low-cost commodity devices. One common approach that these devices use is structured light, where a known pattern is projected onto a scene, and depth values are computed by analyzing how this pattern is deformed when striking surfaces. While giving accurate results, this method cannot be used in a multi-view setup, since the projected patterns interfere across devices. Another common strategy is using time-of-flight based methods for depth estimation. These cameras measure the time-of-flight of an emitted light pulse for each pixel of the camera, which can then be used to infer the depth since the speed of light is known. However, these cameras suffer from inherent problems with multi-path interference.

Passive stereo methods use two RGB images to compute depth maps via triangulation. Popular methods that achieve high quality include PatchMatch Stereo, which has also been extended to real time. One hierarchical stereo-matching approach achieves high-quality face reconstruction using an iterative refinement scheme. Another approach, using an iterative refinement method implemented in CUDA, achieves real-time performance, albeit with striping artifacts.

While this is a well-studied problem and there exist many compelling approaches, there remains a need for a method that can be used in a simple system and that provides even lower computational complexity than current methods, while retaining an acceptable quality.

Summary

It is an object of the present invention to provide an improved solution that alleviates the mentioned drawbacks with present devices. Furthermore, it is an object to provide a method that can be used for real-time generation of 3D imaging that is fast and computationally efficient while providing acceptable image quality.

The invention is defined by the appended independent claims, with embodiments being set forth in the appended dependent claims, in the following description and in the drawings.

According to a first aspect of the present invention, a method of real-time generation of a 3D geometry of an object is provided. The method comprises the steps of calibrating at least one RGB camera pair arranged to provide images of the object, receiving at least one pair of input images of the object from the at least one RGB camera pair, and performing a stereo reconstruction using the input images. The stereo reconstruction step further comprises the steps of a) performing a ray marching operation in a first hierarchical level with a first resolution on the at least one pair of input images to determine geometry positions along each view ray of the images, wherein the first resolution is based on a target geometric resolution for a highest hierarchical level; b) applying a uniqueness criterion to the geometry positions and discarding geometry positions not fulfilling the uniqueness criterion; c) determining a normal for each geometry position; d) performing a regularization operation based on the geometry positions and the respective normal by intersecting each view ray with planes constructed from neighboring geometry positions and normals, and providing updated geometry positions as edge-preserving weighted averages of said intersections with respective updated normal; and e) performing an interpolation operation on the updated geometry positions and optionally the respective normal to increase the resolution in each dimension, by first a horizontal interpolation of the geometry positions followed by a vertical interpolation of the geometry positions. The method further comprises repeating steps a) and c)-e) in at least one iteration in at least one ascending hierarchical level of the ray marching operation, wherein the resolution in the ascending hierarchical level of the ray marching operation is increased, for instance doubled, for each iteration, resulting in a geometry buffer for each of the at least one camera pair. The method further comprises, after the final iteration, a step of performing a mesh alignment comprising the steps of f) performing a triangulation of the position buffers and normals into meshes; g) performing a non-rigid alignment to merge the meshes; h) creating a distance map comprising, for each projected pixel in the mesh, screen-space coordinates of the object silhouette closest to that particular screen-space location; i) rendering, for each mesh of a camera of a camera pair, all other meshes from the camera’s view to get a depth map; j) performing, for each position in the mesh of the camera, a screen-space ray marching operation in the normal direction in the depth map, to determine if there are any intersections with another mesh, and, if an intersection is determined, storing a vector in the direction of the intersection, or the average of the intersections if there are several, in an output texture; and k) averaging the vectors for each mesh with their neighboring vectors, wherein the averaged vectors are applied to each corresponding position in the mesh.

By providing the above described method, a method of generating real-time 3D imaging of an object is achieved, which can be used in real-time applications. The method provides a computationally fast and efficient process while maintaining plausible imaging quality. Further, the method requires no specific lighting to achieve an acceptable result. Another advantage of the method is that no templates are needed for the creation and recording of the model and rendering of the avatar.

The camera pair comprises two cameras. They are associated with each other as a pair. The cameras in a camera pair may be positioned next to each other, or at a distance from each other. The cameras may be standard RGB webcams, and the setup may be simple, without requiring complicated arrangements of a large number of cameras. In further embodiments of the invention, two, three or four camera pairs are used in the method. In a yet further embodiment, between one and eight camera pairs are used.

The uniqueness criterion may for instance be based on energy minimization, expected maximization of similarity, pixel matching, outlier filtering, or the like.

The method may provide a geometry buffer that may generate a view of the scene of the object from a perspective of a virtual camera that a user may control. Following the advantages of the present invention, such control may be performed in real time.

The interpolation operation may use the respective normal of the geometry positions in cases where an interpolated point only has one valid neighbor. Then the intersection of the view ray with a plane constructed from the neighbor using the normal may be used. If there are two valid neighbors, the new point may be the average of these two points projected on the current view ray.

In one embodiment, the ray marching operation may use the pair of input images from the camera pair, i.e. one image from each camera. Each image may have an image resolution. The exact image resolution may not be important as long as the images have enough image resolution for the geometric resolution that is desired to be achieved. If a geometric resolution of MxN is desired at a certain hierarchical level, then MxN pixels in a first of the images are selected, possibly evenly distributed over the image.

For each selected pixel in the first image, a corresponding pixel in the other, second, image of the pair of images may be determined. A corresponding pixel may mean that the two pixels picture approximately the same point of the same object. In the ray marching, a stepwise marching is performed along a ray from the first camera’s position in 3D space through the pixel in the first image into the virtual 3D scene. For each step, one is at a certain 3D position, p, along the ray. This position p may then be projected on the second image, and the pixels from the first and the second images are compared to determine whether they picture the same point that is intended to be 3D reconstructed. Also pixels around the identified pixels may be used in the comparison. The comparison may result in a score for how good the matching between the two pixels is. Then the marching along the ray may be continued. When the marching along the entire ray is completed, the position p with the best score may be returned.

In one embodiment, the applying of a uniqueness criterion may comprise that a similar ray marching operation is performed again, but with the first and the second image switched, such that the ray marching starts from a pixel in the second image and is compared to corresponding pixels in the first image. Further, the result from this second operation may be compared to the result of the first ray marching operation to determine whether the results for the positions are the same. In one embodiment, the ray marching operation in the uniqueness criterion may be performed based on the pixels that in the first ray marching operation were identified to have a corresponding pixel in the second image.

According to one embodiment, the method may further comprise, prior to the step of performing a stereo reconstruction, a step of creating a visual hull, and wherein the stereo reconstruction may be based on the visual hull. By performing the stereo reconstruction based on a visual hull, the time used for the iterations in the stereo reconstruction may be reduced. The time for an iteration in the stereo reconstruction based on a visual hull may for instance be half of the time required without using a visual hull. In an embodiment wherein a visual hull is not used, the step of ray marching in the first iteration may be extended instead.

According to one embodiment, the step of creating a visual hull may comprise a step of determining intersections of silhouette cones from a camera towards the object by rendering every other camera’s silhouette cone, using the depth values of these rendered silhouette cones to create a depth map after repeating the step for every camera in the at least one camera pair, wherein the depth map may be used as starting position for the stereo reconstruction.

According to one embodiment, the step of performing an interpolation operation may comprise a step of performing a horizontally aligned interpolation of the positions and normals followed by a vertically aligned interpolation of the positions and normals.

According to one embodiment, the method may further comprise, after the final iteration, a step of performing a mesh alignment comprising the steps of f) performing a triangulation of the position buffers and normals into meshes; g) perform a non-rigid alignment to merge the meshes; h) creating a distance map comprising screen-space coordinates of the object silhouette closest to that particular screen-space location for each projected pixel in the mesh; i) rendering, for each mesh of a camera of a camera pair, all other meshes from the camera’s view to get a depth map; j) perform, for each position in the mesh of the camera, a screen-space ray marching operation in the normal direction in the depth map, to determine if there are any intersections with another mesh, and if an intersection is determined a vector in direction of the intersection, or average of intersections if there are several, is stored in an output texture; and k) averaging the vectors for each mesh with their neighboring vectors and wherein the averaged vectors are applied to each corresponding position in the mesh.

According to one embodiment, the length of the vectors stored in the output texture may be weighted with the distance to the closest silhouette point determined from the distance map.

According to one embodiment, the method may further comprise a step of I) updating the normals based on the aligned mesh geometry.

According to one embodiment, the method may further comprise a step of m) repeat steps f)-l) in at least one iteration for all meshes until a desired level of alignment is reached.

When performing either embodiment of mesh alignment, the result may be a mesh that is aligned to other meshes at overlapping locations. By repeating the steps, the level of alignment may be higher for each iteration, and the iterations may be performed until a desired level of alignment is reached.

According to one embodiment, the method may further comprise a step of determining the surface to which positions in the buffer belong when meshes are overlapping comprising rendering a closest front-facing surface and a closest back-facing surface, sorting view-samples according to depth, determining whether a back-facing sample is between two front-facing samples, and if so discard subsequent samples.

According to one embodiment, the method may further comprise a step of n) performing a color blending operation in which colors may be sampled from said one or more camera pair and weighted together.

According to one embodiment, the weighting of the sampled colors may use a linear combination of two weights for each sample.

According to one embodiment, the color C of each color sample may be determined as a combination, controlled by a parameter f between 0 and 1, of each color sample c_n weighted by the weights w_d,n and w_v,n for each sample n, wherein the first weight w_d,n is sampled directly from the distance map and the second weight w_v,n is computed from v_o,n, the view space position of the sample n, and v_c,n, the corresponding camera position.

Any of the color blending embodiments above provide a computationally effective way of sampling the colors from the camera pair or pairs into the generated view of the object. In this way, the generation of geometry buffers may be performed efficiently without dealing with colors until the final view is generated.

According to one embodiment, the step of calibrating the at least one camera pair may comprise a step of performing a structure from motion, SfM, process using the object.

According to a second aspect of the present invention, a 3D image generation device is provided, comprising a receiving unit and a processing unit, wherein the device is configured to perform the method according to any of the embodiments above.

According to a third aspect of the present invention, a system for generating a 3D image reconstruction is provided, comprising at least one RGB camera pair and a 3D image generation device according to the embodiment above.

Brief Description of the Drawings

The invention will in the following be described in more detail with reference to the enclosed drawings, wherein:

Fig. 1 shows an overview flow chart of a pipeline according to an embodiment of the present invention;

Fig. 2 shows an illustration of how a camera’s silhouette cone is constructed with triangles that extend from the camera center to the silhouette of the segmented foreground;

Fig. 3 shows the intersection of the segmented foreground surfaces providing a visual hull that creates a bounding volume for where the geometry resides;

Fig. 4 shows an illustration of reprojected images in a rectified image pair according to prior known methods;

Fig. 5 shows an illustration of a rectification using surface-oriented matching windows according to prior known methods;

Fig. 6 shows a step of regularization according to an embodiment of the present invention, looking at the intersection of the current view-ray with planes constructed from neighboring positions and normals, used for computation of a regularized position as an edge-preserving weighting of the ray/plane intersections;

Fig. 7 shows an exemplary camera pair setup according to an embodiment of the present invention;

Fig. 8 shows a table of parameters used for a method according to an embodiment of the present invention;

Figs. 9a-d show performance illustrations of methods according to embodiments of the present invention; and

Fig. 10 shows images from different steps of a method according to an embodiment of the present invention.

Description of Embodiments

The present invention will be described more fully hereinafter with reference to the accompanying drawings, in which preferred embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. In the drawings, like numbers refer to like elements. In the embodiment presented in the following, eight webcams are used that are grouped into stereo pairs and placed in an arc around the model. An advantage with the invention is that relatively simple cameras can be used, such as Logitech C922 webcams. A green screen and two diffuse light sources can be used to help background segmentation and uniform lighting. The cameras are calibrated 20 with a semi-automatic method and corrected for scale. Different methods are available for such calibration.

The cameras are connected to a computer comprising a Central Processing Unit (CPU) and a Graphics Processing Unit (GPU).

The method according to an embodiment of the present invention is illustrated in Fig. 1 as a real-time pipeline consisting of multiple stages that start with image capture and end with a final view rendered from the current position of a user-controlled virtual camera. All major steps of the pipeline are computed on the GPU.

The method starts with a capture 10 of the scene and a preprocessing 30 of the images by undistorting with the camera's distortion parameters, converting to grayscale, and applying background segmentation. A visual hull 40 of the geometry is optionally computed to constrain the search space for the stereo matching steps 50.

The geometry is computed per camera pair 3 (see fig. 3), using an optimized view-space stereo-matching method. The method is composed of multiple steps and is applied hierarchically in a coarse-to-fine manner. Early in the method, geometry normals are estimated, which are used to orient the matching filters, as well as for regularization 56 and interpolation 58. Constraints are applied to discard invalid parts of the geometry.

Position and normal buffers from the stereo reconstruction 50 are used to create one mesh per camera pair. To minimize gaps in the overlap between these meshes, they are processed with a screen-space non-rigid alignment method that deforms the meshes relative to the local quality of the reconstruction using a distance-to-mesh-silhouette weighting. Finally, these meshes are rendered from the current virtual camera position. In this camera's screen space it is resolved which meshes are visible for each pixel, and a final color can be computed using blending. This blending function uses two weights: one that represents the angle between the virtual camera and the recording cameras, and one that is sampled from a distance-to-mesh-silhouette map computed in screen space for the virtual camera.

Calibration

In one embodiment, the system may be calibrated using a standard structure from motion (SfM) pipeline. A simple pinhole camera model may be used with one coefficient of radial distortion. Before capture, the intended subject can be used as the calibration target. This has two advantages: it optimizes the calibration for the specific scene in question, and calibration can be done just before capturing is started. This minimizes the risk of accidentally moving the cameras or otherwise disrupting the scene before recording. To this calibration, an extra step may be added to also compute the scale of the scene, which may simplify the reasoning about its geometrical quantities. This is done by using a small board with two black circles, spaced at a known distance. The black circles are detected in two chosen cameras in image space using OpenCV, and the world space position of each circle is obtained using triangulation with the calibration obtained from the SfM. The computed distance between the circles can then be compared to the known distance to produce a scale factor.
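As an illustration of the scale-factor step, the following simplified Python/OpenCV sketch detects the two circles in two calibrated views, triangulates them, and compares the triangulated distance to the known spacing. The circle-detection parameters and the left-to-right ordering of the detections are assumptions made for the sketch only; the disclosed method merely requires that the circle centers are detected and triangulated.

```python
import numpy as np
import cv2

def scale_factor(img_a, img_b, P_a, P_b, known_dist):
    """Metric scale factor for an SfM calibration, from a board with two black
    circles spaced known_dist apart. P_a/P_b are the 3x4 projection matrices of
    the two chosen cameras (hypothetical inputs for this sketch)."""
    def two_circle_centers(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5,
                                   minDist=gray.shape[0] // 8, param2=40)
        assert circles is not None and circles.shape[1] >= 2, "circles not found"
        pts = circles[0, :2, :2].astype(np.float64)   # (x, y) of the two circles
        return pts[np.argsort(pts[:, 0])]             # sort by x for a consistent order

    pts_a = two_circle_centers(img_a).T                # 2xN layout for triangulatePoints
    pts_b = two_circle_centers(img_b).T
    X_h = cv2.triangulatePoints(P_a, P_b, pts_a, pts_b)
    X = (X_h[:3] / X_h[3]).T                           # two world-space circle centers
    measured = np.linalg.norm(X[0] - X[1])             # distance in SfM units
    return known_dist / measured                       # multiply the scene by this factor
```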

Capturing And Preprocessing

In one embodiment, each video feed from the cameras is streamed in Full HD resolution at 30Hz to the computer. FFmpeg can be used to capture the video streams directly into the pipeline, or to replay pre-recorded video streams. These FFmpeg instances run on separate CPU threads, decoding video in the background. Two different synchronization schemes are used depending on the source. For live video, it is ensured that the incoming streams use small buffers and that the pipeline is emptied as fast as possible (possibly by dropping frames), so that the latest incoming frames always enter the pipeline. For recorded streams, the videos may be synchronized by using the embedded presentation time stamps for each frame. This synchronization will only be approximate, since the cameras do not have a global synchronization mechanism.

The cameras could be configured to have automatic exposure disabled, to ensure comparable colors across cameras, and auto-focus disabled so as not to disrupt the calibration. For simplicity and higher quality, the green screen may be employed to help with background/foreground segmentation. Two diffuse light sources may be used to get reasonably uniform lighting. The person being captured may also wear a green shirt to provide a precise segmentation of the neck region.

At the start of each frame, the captured images are immediately uploaded to GPU memory and all subsequent computation is performed on the GPU. This leaves the CPU free to decode the video streams and perform application logic. At the cost of a single frame of latency, the next frame could be uploaded asynchronously while processing the current one. The first step in the image preprocessing is undistortion of the captured images using the distortion parameters obtained in the calibration step. Next, a standard chroma-key algorithm may be used to get a high-quality segmentation of the object 2. Finally, the images are converted to grayscale and a mipmap hierarchy is computed.
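A minimal sketch of this per-frame preprocessing is given below, using Python and OpenCV on the CPU for clarity (the described embodiment performs these steps on the GPU). The chroma-key parameters key_hue and hue_tol are illustrative assumptions, not values taken from the disclosure.

```python
import numpy as np
import cv2

def preprocess(frame_bgr, K, dist_coeffs, key_hue=60, hue_tol=18):
    """Undistort, chroma-key the green background, convert to grayscale and
    build a mipmap pyramid for one captured frame."""
    undist = cv2.undistort(frame_bgr, K, dist_coeffs)
    hsv = cv2.cvtColor(undist, cv2.COLOR_BGR2HSV)
    # Foreground: pixels whose hue is far enough from the green-screen hue.
    foreground = (np.abs(hsv[..., 0].astype(int) - key_hue) > hue_tol).astype(np.uint8)
    gray = cv2.cvtColor(undist, cv2.COLOR_BGR2GRAY) * foreground
    # Mipmap hierarchy: successive 2x downsamplings, used for approximate mean values.
    mipmaps = [gray]
    while min(mipmaps[-1].shape) > 1:
        mipmaps.append(cv2.pyrDown(mipmaps[-1]))
    return foreground, mipmaps
```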

The volume in which the potential geometry resides may be significantly restricted by utilizing a foreground segmentation and a wide-angle setup, as illustrated in fig. 2. For each camera, a surface is created, which consists of triangles with one vertex in the camera center and two vertices along the silhouette (see fig. 2). The intersection of these silhouette cones, the shaded area of the object in fig. 3, creates a boundary for the geometry which is commonly called the visual hull. To compute this intersection efficiently for a specific camera, every other camera's silhouette cone is rendered from it in turn, storing only the depth value. As each of the other cameras is rendered in this manner, the new values are blended into the frame buffer, for each pixel retaining the smallest value (e.g. furthest away from the camera in an OpenGL coordinate system), effectively tightening the boundary around the geometry. This is then repeated for all cameras. This computed depth map can then be used as a starting position for the following stereo reconstruction, which both decreases the number of matches that need to be performed and reduces the risk of outliers in the stereo matching.
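The following sketch illustrates the effect of the visual hull computation: a conservative per-pixel starting depth for the stereo reconstruction. For simplicity it marches each view ray against the other cameras' foreground masks on the CPU instead of rasterizing silhouette cones on the GPU as in the described embodiment; cam.ray, other.project and other.mask are assumed helper attributes for illustration.

```python
import numpy as np

def visual_hull_depth(cam, others, t_near, t_far, steps=256):
    """Per-pixel starting depth for stereo matching: the first point along each
    view ray of `cam` that falls inside every other camera's foreground mask."""
    depth = np.full((cam.height, cam.width), np.inf)
    for y in range(cam.height):
        for x in range(cam.width):
            origin, direction = cam.ray(x, y)          # assumed helper
            for t in np.linspace(t_near, t_far, steps):
                p = origin + t * direction
                inside = True
                for other in others:
                    u, v = other.project(p)            # assumed helper: pixel (col, row)
                    if not (0 <= v < other.mask.shape[0] and
                            0 <= u < other.mask.shape[1] and other.mask[v, u]):
                        inside = False
                        break
                if inside:
                    depth[y, x] = t                    # conservative geometry estimate
                    break
    return depth
```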

Stereo Reconstruction

The conventional method of doing local stereo matching is to use epipolar geometry and compute the matching over an N×N pixel window in a rectified image pair, i.e. two images where the image planes are parallel. This is done both as a simplification of the matching problem and as an optimization, since a view ray from the first image projects to a horizontal line in the second image. This greatly increases performance in CPU implementations since it leads to a much more cache-friendly data-access pattern. However, there are two major drawbacks with using image rectification; the first is the inevitable distortion of the rectified image under reprojection, the other is the implicit use of fronto-parallel matching windows (see figs. 4 and 5). Popular techniques such as PatchMatch Stereo have shown that there can be significant quality gains from using oriented matching windows. In the present invention, normals are estimated that can be used to orient matching windows, as well as for geometry-aware interpolation and smoothing (see further below), and rectification can thereby be excluded completely. This does not affect performance significantly in a GPU-centric implementation, since GPUs rely much more on latency hiding than caches and since the GPU's texture cache is optimized for spatial locality.

In an embodiment of the present invention, a hierarchical local stereo-matching method is used for each camera pair. However, to be able to easily estimate and utilize normal information, stereo matching is done by raymarching view rays from one camera in each pair and storing the results as view-space geometry buffers. The matching is divided both into multiple steps and into a hierarchical scheme, where the resolution is doubled in each dimension for each level of the hierarchy (see fig. 1). This resolution is a target geometric resolution, which is equal to the resolution for the highest level in the hierarchy. Below, each step is first described for the first hierarchical level, followed by a description of how higher hierarchical levels are handled.

Raymarching

In the first ray-marching step, a resolution 2^N times lower than the defined geometric resolution is used, i.e., a view ray is followed for every 2^N-th vertical and every 2^N-th horizontal pixel for one of the cameras in each pair. Marching is performed along the ray for a fixed length, starting from the conservative estimate of the geometry constructed as described above. For each step along the ray, a rectangular filter is constructed in world space and projected on both cameras. Colors are sampled with mipmapping in the corresponding color textures, and the distance between both patches is computed using a mean-subtracted Sum of Absolute Differences (SAD) cost function, where the distance D_SAD is computed as

D_SAD = Σ_i |(p_i − μ_p) − (q_i − μ_q)|, for i = 1, ..., n,

where n is the number of samples, p_i is each sample from the first image with corresponding mean value μ_p, and q_i is each sample from the second image with corresponding mean value μ_q. This cost function is chosen since it is robust to local lighting conditions while still being cheap to compute, since an approximate mean value can be sampled by using the mipmap hierarchy. The position with the best score along the ray is written to the position buffer.
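A compact sketch of the ray-marching step and the mean-subtracted SAD cost is shown below. The patch-sampling helpers sample_patch_a and sample_patch_b, which project the world-space filter onto each camera and return the sampled grayscale values, are assumed for illustration; the described implementation performs this with mipmapped texture lookups on the GPU.

```python
import numpy as np

def zsad(patch_p, patch_q):
    """Mean-subtracted sum of absolute differences between two sampled patches."""
    return np.sum(np.abs((patch_p - patch_p.mean()) - (patch_q - patch_q.mean())))

def march_ray(origin, direction, t_start, t_end, steps, sample_patch_a, sample_patch_b):
    """Walk the view ray, score the projected filter in both cameras with the
    ZSAD cost and return the best-scoring position and its cost."""
    best_t, best_cost = t_start, np.inf
    for t in np.linspace(t_start, t_end, steps):
        p = origin + t * direction
        cost = zsad(sample_patch_a(p), sample_patch_b(p))
        if cost < best_cost:
            best_cost, best_t = cost, t
    return origin + best_t * direction, best_cost
```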

Uniqueness

In the marching step, no threshold is used on the matching score, and the best matching point along the ray is taken. This gives rise to invalid geometry in areas where only one of the cameras has visibility. In the second step, this problem is addressed by identifying these problematic areas by projecting the obtained position buffer from the first camera onto the second camera and then performing stereo matching along the second camera's view rays for the projected points. If the best match along this ray does not closely match the best match from the first camera, the point is discarded.
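A sketch of this uniqueness test, expressed as a left/right consistency check, may look as follows; cam_b.ray_through, march_second_ray and the tolerance tol are illustrative assumptions rather than names from the disclosure.

```python
import numpy as np

def is_unique(p_first, cam_b, march_second_ray, tol):
    """Keep the point found from camera A only if re-matching along camera B's
    corresponding view ray lands close to the same 3D position."""
    origin, direction = cam_b.ray_through(p_first)    # assumed helper: B's ray through p
    p_second, _ = march_second_ray(origin, direction)
    return np.linalg.norm(p_first - p_second) < tol   # False means: discard the point
```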

Normals

Up until this step (for the first hierarchical level), no normal information for the geometry has been used, and the filters used have been oriented with the z-plane of their corresponding camera. Now that a first initial guess of the geometry is obtained, the normals can be estimated. This may be done using a simple average of normals computed using the cross product of neighbors of each position in the buffer.
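The normal estimation can be sketched as below; this simplified version uses one cross product of central differences per buffer position rather than an average over all neighbor pairs, which is sufficient to illustrate the idea.

```python
import numpy as np

def estimate_normals(positions, valid):
    """Per-position normals for an HxWx3 view-space position buffer, from the
    cross product of vectors to the horizontal and vertical neighbors."""
    normals = np.zeros_like(positions)
    h, w, _ = positions.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not valid[y, x]:
                continue
            dx = positions[y, x + 1] - positions[y, x - 1]
            dy = positions[y + 1, x] - positions[y - 1, x]
            n = np.cross(dx, dy)
            length = np.linalg.norm(n)
            if length > 0:
                normals[y, x] = n / length
    return normals
```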

Regularization

The raw stereo matching is noisy, and one may want to enforce that a continuous surface is reconstructed. This can be done in a regularization step, where the current view ray is intersected with planes constructed from each of the neighboring points and normals, see fig. 6. An edge-preserving weighted average of these intersections is computed as a new position along the ray. With t_n being the parameter along the ray corresponding to an intersection between the current view-ray r_v and a plane constructed with a neighboring point P_n and corresponding normal N_n, an intersection r_v(t_n) is provided for each neighbor n. A weight w_n can be computed which is the inverse absolute distance between this point along the ray and the corresponding point along the ray for the current point P_0, i.e.

w_n = 1 / |t_n − t_0|.

The new point P along the ray can then be computed as the weighted average

P = (Σ_n w_n r_v(t_n)) / (Σ_n w_n)

over all neighboring positions.
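A sketch of this regularization of a single buffer position, under the formulation above, is given below. The ray direction is assumed to be normalized, and degenerate ray/plane intersections are simply skipped.

```python
import numpy as np

def regularize_position(ray_o, ray_d, t0, neighbors):
    """Edge-preserving regularization of one position along its view ray.
    `neighbors` is a list of (P_n, N_n) tuples of neighboring positions and normals."""
    ts, ws = [], []
    for P_n, N_n in neighbors:
        denom = np.dot(ray_d, N_n)
        if abs(denom) < 1e-6:
            continue                                  # ray nearly parallel to the plane
        t_n = np.dot(P_n - ray_o, N_n) / denom        # ray/plane intersection parameter
        ts.append(t_n)
        ws.append(1.0 / (abs(t_n - t0) + 1e-6))       # w_n = 1 / |t_n - t_0|
    if not ws:
        return ray_o + t0 * ray_d                     # nothing to regularize against
    t_new = np.average(ts, weights=ws)                # weighted average of intersections
    return ray_o + t_new * ray_d
```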

In this step basic hole-filling may also be done if a point has many valid neighbors, and pruning, if a point has very few valid neighbors. After this step, the normals are recomputed in the same manner as discussed above.

Interpolation

The last step of the stereo reconstruction method is to interpolate positions and normals for the next level in the hierarchy. This is done in a separable manner, first horizontally and then vertically in the view-aligned geometry buffers. If there are two valid neighbors, the new point is the average of these two points projected on the current view ray. If there is only one valid neighbor, the intersection of the view ray with a plane constructed from the neighbor may be used, similarly to what is discussed above.
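The horizontal pass of this separable interpolation can be sketched as follows; the vertical pass is identical with rows and columns exchanged. The helper view_ray(i), returning the origin and unit direction of the new pixel's view ray, is assumed for illustration.

```python
import numpy as np

def interpolate_row(positions, normals, valid, view_ray):
    """Horizontal upsampling of one row of the geometry buffer: one new sample is
    inserted between every pair of existing samples."""
    def project_on_ray(o, d, p):                      # closest point on the ray to p
        return o + np.dot(p - o, d) * d
    def ray_plane(o, d, p, n):
        den = np.dot(d, n)
        return o + (np.dot(p - o, n) / (den if abs(den) > 1e-8 else 1e-8)) * d

    out_pos, out_valid = [], []
    for i in range(len(positions) - 1):
        out_pos.append(positions[i]); out_valid.append(bool(valid[i]))
        o, d = view_ray(i)                            # ray of the new, in-between pixel
        if valid[i] and valid[i + 1]:                 # two valid neighbors: averaged point
            mid = 0.5 * (positions[i] + positions[i + 1])
            out_pos.append(project_on_ray(o, d, mid)); out_valid.append(True)
        elif valid[i]:                                # one valid neighbor: use its plane
            out_pos.append(ray_plane(o, d, positions[i], normals[i])); out_valid.append(True)
        elif valid[i + 1]:
            out_pos.append(ray_plane(o, d, positions[i + 1], normals[i + 1])); out_valid.append(True)
        else:
            out_pos.append(np.zeros(3)); out_valid.append(False)
    out_pos.append(positions[-1]); out_valid.append(bool(valid[-1]))
    return np.array(out_pos), np.array(out_valid)
```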

Higher levels of the hierarchy

The next levels of the hierarchy are executed in the same manner, doubling the resolution (see above) for each iteration until the highest hierarchical level has finished. A few differences from the first hierarchical level should however be noted. There are now estimated normals in each step, and the filters used for stereo matching are thus oriented in the corresponding direction. There is no longer a fixed raymarch length for the raymarching, but the range is instead computed before starting stereo matching for the current hierarchical level. The range is computed as double the maximum distance from the current position to the intersection of the current view-ray with planes constructed by all neighboring positions and their corresponding normals. This gives a rough estimate of the curvature of the local geometry and adapts the search range accordingly. Finally, since the uniqueness step is very costly, being equally as expensive as the actual marching step, it may be performed only in the first hierarchical level. This should, however, not lead to invalid points being introduced again, since these will not be rematched further up the hierarchy.

Temporal Denoising

Immediately following the stereo reconstruction, one may employ a simple averaging scheme to suppress noise in the computed geometry buffers. The pipeline is delayed by N frames, and an average filter is then created over the view-space positions, using N frames forward in time and N frames backwards in time, centered around the current frame. Averaging may also be done in the case of the current frame's positions being invalid. This has a hole-filling effect that mitigates flickering triangles along silhouettes. After this step, the normals may be recomputed to match the new geometry.
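A sketch of this temporal averaging over a window of 2N+1 frames is shown below; invalid samples are simply left out of the average, which also gives the hole-filling behavior mentioned above.

```python
import numpy as np

def temporal_average(position_buffers, valid_buffers):
    """Average 2N+1 consecutive position buffers (shape (2N+1, H, W, 3)) centered
    on the current frame; invalid samples (valid_buffers, shape (2N+1, H, W)) are
    excluded from the average."""
    w = valid_buffers[..., None].astype(np.float64)
    summed = np.sum(position_buffers * w, axis=0)
    counts = np.sum(w, axis=0)
    averaged = np.divide(summed, counts, out=np.zeros_like(summed), where=counts > 0)
    return averaged, counts[..., 0] > 0               # averaged positions and validity
```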

Mesh alignment

For each camera pair, there are now view-space geometry buffers. In offline algorithms, these buffers would now be merged in a global surface reconstruction step using e.g. Poisson Surface Reconstruction, along with further refinement steps. For performance reasons, a simpler approach is used in this embodiment of the present invention. Each set of position buffers and normals is triangulated into meshes using a simple grid-based algorithm. Since these meshes are computed independently for each camera pair, they will not have perfect overlap, and small gaps can be seen when the meshes are overlaid on top of each other. To remedy this, a non-rigid alignment algorithm is used that merges the meshes together in the overlapping parts.

The quality of the meshes will reflect the quality of the stereo reconstruction and is thus dependent on well textured areas that are visible in both cameras. The quality will generally be worst at silhouettes, where the geometry has a high angle towards the camera and the visibility for the whole projected matching filter is not guaranteed for both cameras. Following this argument, the heuristic that is used is that the quality of the mesh is lower closer to the silhouette of each mesh, and those areas may be aligned to an overlapping mesh whose local area is further away from its corresponding silhouette.

Consequently, there may be a need to determine, for each point in the mesh, how close it is to the silhouette. Optimally, one would like to compute the actual geodesic distance along the mesh, but for performance reasons one may instead compute the distance in pixels for the projected mesh. This computation is done in screen space with a GPU version of the flood-fill algorithm, called jump flooding. First, the locations of all silhouette points are computed in a single pass by looking at the validity of neighboring pixels for each pixel. The pixel locations of these silhouettes are then propagated to every other pixel in the projected mesh and stored in a distance map. If a propagated silhouette location is closer to the current pixel than what was previously written in the distance map, or if it did not have any previous value, it is updated with this new location. When finished, the distance map contains, for each projected pixel in the mesh, the screen-space coordinates of the silhouette closest to that particular screen-space location.
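The jump-flooding computation of the distance map can be sketched as follows, as a CPU/NumPy version of what the embodiment runs as GPU passes. The result is, for every pixel, the screen-space coordinate of the nearest silhouette pixel, from which the distance itself is trivially obtained.

```python
import numpy as np

def jump_flood_distance_map(silhouette):
    """For every pixel, find the screen-space coordinate (row, col) of the nearest
    silhouette pixel using jump flooding; `silhouette` is an HxW boolean mask."""
    h, w = silhouette.shape
    INF = 10.0 * (h + w)
    grid = np.stack(np.mgrid[0:h, 0:w], axis=-1)
    seeds = np.full((h, w, 2), -1, dtype=np.int64)
    ys, xs = np.nonzero(silhouette)
    seeds[ys, xs] = np.stack([ys, xs], axis=1)        # silhouette pixels seed themselves

    def dist(s):                                      # distance to the stored seed
        return np.where(s[..., 0] < 0, INF, np.linalg.norm(s - grid, axis=-1))

    def shift(a, dy, dx):                             # shift without wrap-around
        out = np.full_like(a, -1)
        y0, y1 = max(dy, 0), h + min(dy, 0)
        x0, x1 = max(dx, 0), w + min(dx, 0)
        out[y0:y1, x0:x1] = a[y0 - dy:y1 - dy, x0 - dx:x1 - dx]
        return out

    step = 1 << int(np.ceil(np.log2(max(h, w))))
    while step >= 1:
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                cand = shift(seeds, dy, dx)           # neighbor's current best seed
                better = dist(cand) < dist(seeds)
                seeds = np.where(better[..., None], cand, seeds)
        step //= 2
    return seeds
```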

The mesh-alignment algorithm uses an iterative approach, where each mesh is considered in relation to the other meshes one by one. For each camera for which a mesh has been generated (i.e. one per each pair of cameras), all other meshes are rendered from its view to get depth maps, and a distance to silhouette map is computed for it. For each position in the current mesh, a screen-space raymarching is then done in the normal direction in these depth maps, to see if there is an intersection with any of the other meshes. If there is, a vector in the direction of that intersection is stored (or average of intersections if there are several) in an output texture. The length of this vector is weighted with the distance to the closest silhouette as read from the corresponding distance map. These movement vectors for each mesh vertex are averaged with their neighbors in a following pass and are then applied to each corresponding position. The final result is a mesh that is aligned to the other meshes at overlapping locations. The vertex normals are then updated to correspond to this new aligned geometry. This whole process is then repeated for each mesh and can also be executed iteratively for all meshes until the desired level of alignment is achieved.

Final view generation

The purpose of the pipeline according to the described embodiments is to generate a view of the scene from the perspective of a virtual camera controlled by the user. A final step may thus be to compute this view using a reconstructed geometry, the set of aligned meshes, and the captured camera images. The meshes are rendered to the current virtual camera position into separate geometry buffers. To weight the colors together, a distance-to- silhouette map is computed for each rendered mesh with the same method as described above.

Mesh visibility

Since the meshes are overlapping, there is a need to decide which positions belong to the same surface if there are fragments from more than one mesh rendered to a pixel. These fragments could belong to samples of the same surface, but could also belong to other surfaces now occluded by this front-most surface. In the geometry buffers for each mesh, the closest front-facing surface is rendered, but also the closest back-facing surface. If view-samples are sorted according to depth, it can be found whether any such back-facing samples are in between two front-facing ones. In that case it is known that the subsequent samples surely belong to another part of the geometry and can be discarded. Also, a simple threshold may be employed to discern between surfaces, to help with situations where imperfect geometry prevents using this scheme. If the distance between two front-facing surfaces is greater than this threshold, they are considered different surfaces and only the closest is kept.
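The per-pixel surface classification can be sketched as below; the depth threshold used to separate surfaces is an illustrative parameter, and the handling of back-facing samples is a simplification of the rule described above.

```python
def front_surface_samples(samples, depth_threshold):
    """Classify per-pixel fragments: keep only those belonging to the front-most
    surface. `samples` is a list of (depth, facing) tuples with facing in
    {'front', 'back'} for one screen pixel."""
    kept, last_front = [], None
    for depth, facing in sorted(samples, key=lambda s: s[0]):   # sort by depth
        if facing == 'back':
            if last_front is not None:
                break                     # back face behind a front face: rest is occluded
            continue
        if last_front is not None and depth - last_front > depth_threshold:
            break                         # too far from the previous front face
        kept.append(depth)
        last_front = depth
    return kept
```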

Color blending

The fragments that have been deemed visible from the current view can now be projected onto their two corresponding camera images, where the final colors are sampled. These colors, sampled from one or many camera pairs, are weighted together using a linear combination of two weights, w_d,n and w_v,n, for each sample n. The first weight, w_d,n, is sampled directly from the previously computed distance map and ensures that smoothly varying colors are achieved on the overlapping edges between meshes. The second weight, w_v,n, captures view dependence and is computed from v_o,n, the view-space position of the fragment n, and v_c,n, its corresponding recording camera position. Using these weights for each color sample, the final color C can be computed as a blend, controlled by a parameter f between 0 and 1, of the color samples c_n.
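A sketch of the color blending for one output pixel is given below. The source text does not reproduce the exact formulas, so this sketch assumes that the view-dependent weight w_v,n is the cosine of the angle at the fragment between the recording camera and the virtual camera, and that the parameter f linearly blends the two normalized weighted averages; both are assumptions for illustration only.

```python
import numpy as np

def view_weight(v_o, v_c):
    """Assumed view-dependence weight: cosine of the angle at the fragment between
    the recording camera (at v_c) and the virtual camera (view-space origin)."""
    to_cam = np.asarray(v_c, dtype=float) - np.asarray(v_o, dtype=float)
    to_virtual = -np.asarray(v_o, dtype=float)
    cos = np.dot(to_cam, to_virtual) / (np.linalg.norm(to_cam) * np.linalg.norm(to_virtual) + 1e-8)
    return max(cos, 0.0)

def blend_colors(samples, f=0.5):
    """Blend the visible fragments' color samples for one output pixel. Each sample
    is a dict with color c_n, silhouette-distance weight w_d (from the distance map),
    view-space position v_o and recording-camera position v_c."""
    c = np.array([s['color'] for s in samples], dtype=float)
    w_d = np.array([s['w_d'] for s in samples], dtype=float)
    w_v = np.array([view_weight(s['v_o'], s['v_c']) for s in samples])
    blend_d = (w_d[:, None] * c).sum(axis=0) / max(w_d.sum(), 1e-8)
    blend_v = (w_v[:, None] * c).sum(axis=0) / max(w_v.sum(), 1e-8)
    return f * blend_d + (1.0 - f) * blend_v          # assumed combination of the two weights
```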

Fig. 7 illustrates a system according to an embodiment of the present invention. A 3D image generation device 5 is connected to RGB camera pairs 3 directed towards the object 2. In the illustrated embodiment, a green screen 4 is used behind the object 2 to facilitate the image processing. The 3D image generation device 5 comprises a receiving unit for receiving the image input from the cameras, and a processing unit configured to perform a method according to the embodiments described above.

In an exemplary embodiment, the method may be performed with the parameters as seen in fig. 8. The performance of further embodiments is shown in figs. 9a-d.

In fig. 9d, there is a performance breakdown of the main method steps of the pipeline according to one embodiment, for 100 frames using full resolution and the setup of four camera pairs, in the order that the pipeline steps are employed for each frame. The timings reported are the relevant OpenGL and CUDA times, since they dominate the pipeline.

The image preprocessing step is cheap and contains undistortion, foreground segmentation and conversion to monochrome color, as described above. The visual hull generation step contains the creation of silhouette cones and the projection of this geometry onto each camera to constrain the search space for stereo reconstruction, as described above. The expensive part of this step is the rendering of each camera's silhouette cones from each other camera, since it has quadratic complexity with respect to the number of cameras.

Next there is the stereo reconstruction implemented in CUDA as described above, reported for each hierarchical level. This step is highly optimized and does not take up a significant amount of time in the pipeline. The difference in computation time between hierarchical levels is small and does not increase in proportion to the higher resolution, since the number of steps taken along the ray is adjusted when going up the hierarchy. The stereo marching step is dominated by the high number of texture lookups required when matching filters and is thus bottlenecked by the available bandwidth to memory, and by how well the caches are utilized. An implementation may be used that parallelizes across marching steps to ensure that there are enough threads in flight even for low resolutions. The temporal denoising step, see above, is insignificant in terms of computation. However, if the averaging filter width is W, it requires that the whole pipeline is delayed by a number of frames equal to (W − 1)/2.

The last two steps before the final view generation are to triangulate meshes from the geometry buffers and to employ the non-rigid mesh alignment algorithm, as described above. The first of these steps is very cheap, but the other comes at a higher cost since, similarly to the visual hull generation step, it is quadratic in terms of the number of cameras employed.

The lowest area of the graph represents the time it takes to do final view generation as described above. This includes a distance-to-silhouette computation for each projected mesh, computation of mesh visibility and color blending of sampled colors.

In figs. 9a-c, the two most important hyperparameters of the pipeline are varied: the number of cameras used and the resolution of the geometry. It can be seen here that for most steps of the pipeline, the number of cameras is the most expensive parameter. All steps increase their processing time when the number of cameras is increased. Most steps increase linearly, but two steps, visual hull generation and align meshes, increase quadratically.

The resolution of the geometry affects the pipeline more moderately, mostly by adding one more hierarchical level to the stereo reconstruction. It can also be noted that align meshes becomes more expensive with higher resolution.

In general for figs. 9a-d, it can be seen that the performance is stable and only varies slightly from frame to frame. It can also be noted that discounting the top area of the graph representing texture uploads to the GPU, all graphs are well below the 30Hz frame rate of the used webcams. When four cameras are used, the results are also well below a potential 60Hz camera source.

Reconstruction Quality

As described above, the disclosed mesh reconstruction method is a combination of a number of techniques. Fig. 10 shows how the results improve with each added technique. A first bare-bones implementation of the stereo reconstruction can be seen in fig. 10a, where marching is done along the view ray for each pixel, in full resolution, with everything else stripped away from the pipeline. The intersection of each camera's view frustum is used to constrain the search. As can be seen in fig. 10b, the results can be improved by using the silhouette information available in the input videos to constrain the search region. This also reduces the number of raymarching steps required. Next, fig. 10c shows the results of applying regularization to the noisy results as described above. This smooths the results when neighboring matches are close to the true surface, but outliers will still cause disturbing artifacts. Therefore, the uniqueness test is applied as described above, prior to regularization, to remove outliers (see fig. 10d). In fig. 10e, it is shown that averaging the results of the current frame with a few previous and upcoming frames may further improve the results.

At this point, the obtained matches are mostly close to the true surface but, since only very local regularization has been performed, there are still many high-frequency artifacts where remaining erroneous matches can be found. Fig. 10f shows the results of instead reconstructing the mesh hierarchically. Here, a low-resolution mesh is constructed first, using all the steps described above. Then, higher-resolution meshes are obtained by iteratively repeating the process with a shorter search range around the results obtained from the previous hierarchical step. The hierarchical method also means that an estimation of the surface normals is provided, which improves the results of raymarching and regularization in each step. The final resulting mesh from one camera pair is shown in fig. 10g.

One mesh is reconstructed per camera pair and all meshes are rendered as seen from the novel viewpoint to obtain the final image. Although the meshes obtained with the method are usually, individually, a locally smooth and plausible estimation of the true surface, imperfections are easily spotted when the meshes are combined. The meshes may thereby be aligned (as discussed above in terms of mesh alignment). Vertices near edges can be pushed closer to the nearby surface of another mesh to obtain a more coherent result. Finally, the final view is computed using a color weighting scheme (color blending).

In the drawings and specification, there have been disclosed preferred embodiments and examples of the invention and, although specific terms are employed, they are used in a generic and descriptive sense only and not for the purpose of limitation, the scope of the invention being set forth in the following claims.